52 weeks of BetterEvaluation: Week 9: Addressing complexity
There is increasing discussion about the potential relevance of ideas and methods for addressing complexity in evaluation. But what does this mean? And is it the same as addressing complication?
For example, at the Community of Evaluators South Asia’s Evaluation Conclave in Kathmandu, a number of sessions referred to the importance of addressing complexity, including the SEAChange Climate Change session on Complexity and Attribution, and Michael Quinn Patton's keynote address. Videos of these events are available through SEAChange's YouTube channel.
But what do we mean when we talk about complexity in evaluation?
Common usage of the term 'complex'
Very often these discussions focus on complicated aspects of interventions - such as long causal chains with many intermediate outcomes, outcomes that can only be achieved through a 'causal package' involving multiple interventions or favourable contexts, or interventions involving multiple implementing agencies with multiple agendas. This usage has been evident in some of the medical literature - for example, Craig, P., et al. (2008) "Developing and evaluating complex interventions: the new Medical Research Council guidance." BMJ 337: a1655, which focuses on evaluating interventions with multiple components.
Distinguishing between 'complicated' and 'complex'
However, two classic papers by Glouberman and Zimmerman (2002) and Kurtz and Snowden (2003) suggest it is instead useful to distinguish between ‘complicated’ and ‘complex’.
Complicated interventions (or aspects of interventions) have many components and need expertise and co-ordination. Metaphors for complicated aspects of interventions are likely to be images of intricate machines, where many components work together in predictable ways to produce a result.
Complex interventions (or aspects of interventions) are not predictable. They involve emergent, responsive implementation and causal processes that cannot be completely controlled or predicted in advance.
What this might mean for evaluation
The implications of these ideas for evaluation, and especially for development evaluation, are profound - especially when used to classify aspects of interventions rather than whole interventions. Interventions can have some simple aspects, some complicated aspects and some complex aspects, and it is more useful to identify these than to classify a whole intervention as complex.
An intervention without any complicated or complex aspects would have a clearly defined and agreed intended outcome, standardized and stable implementation processes that work through a single causal path to achieve the intended outcome, and would be implemented by a single organisation, without significant contributions from other organisations. It is reasonable to report on these types of interventions in terms of ‘what works’.
Many interventions have complicated aspects such as different components which all need to work effectively and together, or processes that work differently in different contexts, or which only work in combination with other programs or favourable environments. It is essential to report on these in terms of ‘what works for whom in what contexts’.
Some interventions have aspects that are intrinsically dynamic and emergent. While there is an overall goal in mind, the details of the program will unfold and change over time as different people become engaged and as it responds to new challenges and opportunities.
Effective evaluation will not involve building a detailed model of how the intervention works and calculating the optimal mix of implementation activities, because what is needed, what is possible, and what is optimal will always be changing.
Instead, real-time evaluation will be needed to answer the question “What is working?” and to inform ongoing adaptation and learning.
If you would like to understand better the distinction between complicated and complex, and why this often matters for development evaluation, watch the following video by Sophie Windsor Clive & Liberty Smith about a murmuration of starlings:
This is flocking behaviour produced not by a complicated and detailed plan for each bird, but through adaptive, responsive behaviour based on a number of principles. As you watch the video, consider how we might think about development, and evaluating development, in situations where sustainable, positive outcomes will be produced through supporting the capabilities and agency of individuals rather than through implementing a centrally determined plan.
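To make this concrete, here is a minimal sketch of the classic 'boids' flocking rules described by Craig Reynolds, which the murmuration illustrates: each bird reacts only to nearby neighbours through separation, alignment, and cohesion. The parameter values and helper names below are illustrative assumptions, not drawn from the video or this post.

```python
import random

# Minimal 2-D 'boids' sketch: each bird follows three local rules
# (separation, alignment, cohesion). No bird has a global plan, yet
# flock-level patterns emerge. All parameter values are illustrative.

NEIGHBOUR_RADIUS = 5.0   # how far a bird can 'see'
SEPARATION_RADIUS = 1.0  # minimum comfortable distance
MAX_SPEED = 2.0

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 50), random.uniform(0, 50)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def too_close(a, b):
    return (a.x - b.x) ** 2 + (a.y - b.y) ** 2 < SEPARATION_RADIUS ** 2

def step(boids):
    for b in boids:
        neighbours = [o for o in boids if o is not b and
                      (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < NEIGHBOUR_RADIUS ** 2]
        if not neighbours:
            continue
        n = len(neighbours)
        # Cohesion: steer gently towards the local centre of mass.
        cx = sum(o.x for o in neighbours) / n - b.x
        cy = sum(o.y for o in neighbours) / n - b.y
        # Alignment: nudge velocity towards the local average heading.
        ax = sum(o.vx for o in neighbours) / n - b.vx
        ay = sum(o.vy for o in neighbours) / n - b.vy
        # Separation: move away from any bird that is too close.
        sx = sum(b.x - o.x for o in neighbours if too_close(b, o))
        sy = sum(b.y - o.y for o in neighbours if too_close(b, o))
        b.vx += 0.01 * cx + 0.05 * ax + 0.05 * sx
        b.vy += 0.01 * cy + 0.05 * ay + 0.05 * sy
        speed = (b.vx ** 2 + b.vy ** 2) ** 0.5
        if speed > MAX_SPEED:  # cap the speed so behaviour stays plausible
            b.vx, b.vy = b.vx / speed * MAX_SPEED, b.vy / speed * MAX_SPEED
    for b in boids:
        b.x += b.vx
        b.y += b.vy

flock = [Boid() for _ in range(50)]
for _ in range(100):
    step(flock)
```

Nothing in this code specifies the shape of the flock; the swirling patterns emerge from local interactions. Evaluating a complex intervention is analogous: there is no detailed blueprint to check implementation against, so evaluation has to track the patterns that emerge.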
Are you exploring ways of addressing complexity in evaluation? What ideas and methods are you using? What resources have you found useful?
Resources
The following resources provide guidance for addressing complexity in evaluation.
Exploring the science of complexity: Ideas and implications for development and humanitarian efforts
The paper details each of the ten concepts of complexity science, using real-world examples where possible, and then examines the implications of each concept for those working in the aid world.
The ten concepts are as follows (a short sketch illustrating nonlinearity and sensitivity to initial conditions follows this entry):
- Interconnected and interdependent elements and dimensions;
- Feedback processes promote and inhibit change within systems;
- System characteristics and behaviours emerge from simple rules of interaction;
- Nonlinearity;
- Sensitivity to initial conditions;
- Phase space – the ‘space of the possible’;
- Attractors, chaos and the ‘edge of chaos’;
- Adaptive agents;
- Self-organisation;
- Co-evolution.
Source: Ramalingam, B., & Jones, H., with Reba, T., & Young, J. (2008) Exploring the science of complexity: Ideas and implications for development and humanitarian efforts. Overseas Development Institute
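Two of these concepts, nonlinearity and sensitivity to initial conditions, can be illustrated with a standard toy example from complexity science, the logistic map (a sketch of ours, not drawn from the ODI paper itself): a one-line nonlinear rule under which two almost identical starting points soon diverge completely.

```python
# Logistic map x' = r * x * (1 - x): a one-line nonlinear system.
# In its chaotic regime (r near 4), two nearly identical starting
# points diverge rapidly - sensitivity to initial conditions.

r = 4.0
x_a, x_b = 0.300000, 0.300001  # differ by one part in a million

for step in range(1, 41):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: x_a={x_a:.6f}  x_b={x_b:.6f}  gap={abs(x_a - x_b):.6f}")
```

Within roughly twenty steps the two trajectories bear no resemblance to each other, even though the rule is fully deterministic. This is why long-range prediction, and hence evaluation against a fixed detailed forecast, breaks down for the complex aspects of interventions.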
Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation & Use
In this book, Michael Quinn Patton describes the process of conducting developmental evaluations for ongoing program development.
The book includes the following ten sections and makes use of case examples and stories, cartoons, sidebars, and summary tables throughout:
- Developmental Evaluation defined and positioned
- Developmental Evaluation as a distinct purpose and niche
- Thinking outside evaluation's boxes
- Situation recognition and responsiveness: Distinguishing simple, complicated, and complex
- Systems thinking and complexity concepts for Developmental Evaluation
- How the world is changed: A dialectic with thesis and antithesis and Developmental Evaluation as the synthesis
- The adaptive cycle and Developmental Evaluation
- Developmental Evaluation inquiry frameworks
- Developmental Evaluation bricolage: Reflective practice, sensitizing concepts, action research, abduction, systems change
- Utilization-Focused Developmental Evaluation: Engagement practices, diverse designs, and adaptive options
Source: Patton, M. Q. (2010) Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. Guilford Press.
Complex adaptive systems: A different way of thinking about health care systems
Looks at how complexity science could be used in health systems, which are characterised by nonlinear dynamics and by emergent properties arising from diverse populations of interacting individuals, and which are capable of spontaneous self-organisation.
Source: Sibthorpe, B., Glasgow, N., & Longstaff, D. (2004) Complex adaptive systems: A different way of thinking about health care systems. The Australian National University.
Evaluating Performance in a CAS
Describes key features of a complex adaptive system and what these might mean for evaluation and the role of an evaluator.
Source: Eoyang, G., & Berkas, T. (1999) Evaluating Performance in a CAS. Human Systems Dynamics Institute.
Evaluating the Complex
Discusses different aspects of complexity, drawing on evaluation case studies, with particular attention to issues of causal attribution and contribution.
This book contains the following 12 sections:
- Introduction - Kim Forss and Robert Schwartz
- Implications of Complicated and Complex Characteristics for Key Tasks in Evaluation - Patricia J. Rogers
- Contribution Analysis: Addressing Cause and Effect - John Mayne
- Micro, Meso, and Macro Dimensions of Change: A New Agenda for the Evaluation of Structural Policies - Mita Marra
- Coping with the Evaluability Barrier: Poverty Impact of European Support at Country Level - Jacques Toulemonde, Douglas Carpenter, and Laurent Raffier
- Monitoring and Evaluation of a Multi-Agency Response to Homelessness: An Australian Case Study - Peter Wilkins
- Evaluating a Complex Policy in a Complex Context: The Elusive Success of the Swiss Smoking Prevention Policy - Markus Spinatsch
- Intervention Path Contribution Analysis (IPCA) for Complex Strategy Evaluation: Evaluating the Smoke-Free Ontario Strategy - Robert Schwartz and John Garcia
- Responding to a Global Emergency and Evaluating That Response - The Case of HIV/AIDS - Kim Forss
- Evaluating Complex Strategic Development Interventions: The Challenge of Child Labor - Burt Perrin and Peter Wichmand
- Challenges in Impact Evaluation of Development Interventions: Randomized Experiments and Complexity - Jos Vaessen
- Some Insights from Complexity Science for the Evaluation of Complex Policies - Mita Marra
Source: Forss, K., Marra, M., & Schwartz, R. (Eds.) (2011) Evaluating the Complex: Attribution, Contribution, and Beyond. Transaction Publishers.
Evaluation for equity and the fostering of human rights, as part of achieving meaningful development results, often occurs in complex adaptive systems. A complex system is characterised by a large number of interacting and interdependent elements with no central control. Complex environments for social interventions and innovations are those in which what needs to be done to solve problems is uncertain and key stakeholders are in conflict about how to proceed. What has worked in one place may not work in another: context matters. Variations in culture, politics, resources, capacity, and history will affect how development initiatives unfold and how attention to equity and human rights is incorporated into those initiatives. In such situations, informed by systems thinking and a sensitivity to complex nonlinear dynamics, developmental evaluation supports increased effectiveness of interventions, social innovation, adaptive management, and ongoing learning.
Source: Patton, M. Q. Online e-learning programme on Equity-focused Evaluations. Module 2: Methodological approaches relevant to equity-focused evaluations. Unit: Applying Complexity Concepts to Enhance Innovation and Use.
Image credit (top): Starling Roost by oldbilluk on Flickr