Does evaluation need to be done differently to support adaptive management?


Adaptive management is usually understood to refer to an iterative process of reviewing and making changes to programmes and projects throughout implementation.

Commonly associated with environmental and resource management, it is increasingly being used in other areas of programme management and development. Over the next few weeks, we'll be focusing on the growing interest in how monitoring and evaluation can support adaptive management.

This blog starts a process of exploring this issue. We'll be continuing this focus next week with a guest blog from Fred Carden and Arnaldo Pellini, in which they share what they learned about adaptive management in a major project on developing capacity for evidence-based policy.

One of our objectives for this Adaptive Management series is to revise the Decide Purpose task page in BetterEvaluation's Rainbow Framework, and perhaps add a new option of 'Support adaptive management'. To do this, we're looking to learn from your experience. We've posed a number of questions throughout this piece and at the end of the blog, and would love to hear your thoughts.

We're currently exploring new ways of working with BetterEvaluation members and the evaluation community to co-create and share knowledge. If you'd like to be part of this, please click the link at the end of the blog to connect with us and tell us a bit about your experiences or questions. And of course we welcome comments directly on the blog page too.

Using evaluation to support adaptive management

There have been a number of important projects on this issue, such as:

There has been a flurry of recent blogs on this topic, including:

This work has made me think about a number of implications for monitoring and evaluation: in particular, when it is done, why it is done (and for whom), and how it is done.

1. When is evaluation done? Throughout the project cycle, not just at the end

While the BetterEvaluation platform is intended to encompass all types of evaluative activity (before, during and after implementation), I am constantly surprised by people whose concept of evaluation is narrowly restricted to producing a single evaluation report at the end of a project.

For example, a recent paper on impact investing in the Stanford Social Innovation Review showed a typical example of evaluation placed only at the end of the project cycle, with just monitoring during implementation.

It is problematic to use the term ‘evaluation’ only for what comes at the end, given that ‘monitoring’ is usually understood as checking compliance with performance indicator targets.

It’s far better to show evaluation going on throughout the project cycle, including during implementation, as this example from the Network of International Development Organisations in Scotland (NIDOS) shows.

 

However, this still limits evaluation during implementation to mid-term evaluations, which have limited scope for informing the ongoing adaptation that is at the heart of adaptive management.

I wonder how we can clearly include, in the definition of and planning for evaluation, these smaller, iterative studies that don’t fit the way evaluation is so often understood (few in number, large, externally conducted and independent, and focused on impact) or the way monitoring is so often understood (tracking performance against key indicators identified in advance). Would it be helpful to refer to ‘episodes of evaluation’, ‘evaluative inquiry’, or ‘reality-testing’ (the small-scale, iterative process Michael Patton recommends before framing a formal evaluation)? Do we need a new term (and therefore a new type of evaluation), or simply to be clear that the term ‘evaluation’ includes these smaller efforts during implementation?

2. Why is evaluation done and for whom?  Informing different levels of learning and adaptation by different people

Doing evaluation during implementation provides an opportunity to use that information to make changes to implementation (depending on the authorising environment, which is also explicitly addressed in discussions of adaptive management). But these changes can be made at different levels, and undertaken (and authorised) by different people.

One level of adaptation refers to doing the same things but doing them better: more completely, or more on time. Another level refers to tweaking implementation, perhaps even trying some different activities to achieve the same objectives.

These are the levels of adaptation suggested in this diagram from the Tasmanian Parks and Wildlife Service.

 

Even better is the following diagram from the California Department of Fish and Wildlife, which shows a range of adaptations, including changing the objectives and even the understanding of the problem or situation being addressed. They describe an adaptive management approach as providing “a structured process that allows for taking action under uncertain conditions based on the best available science, closely monitoring and evaluating outcomes, and re-evaluating and adjusting decisions as more information is learned.”

 

Most importantly, adaptive management can involve much quicker cycles within a project.

I find particularly helpful the following table by Andrews, Pritchett and Woolcock (2012), which contrasts Problem Driven Iterative Adaptation (PDIA) with mainstream development practice and traditional project planning, where the emphasis is on upfront planning before acting, then checking compliance, and finally checking whether it worked, rather than on informing ongoing change.

Table 1: Contrasting current approaches and PDIA

What drives action?
- Mainstream development projects/policies/programs: Externally nominated problems or ‘solutions’, in which deviation from ‘best practice’ forms is itself defined as the problem
- Problem Driven Iterative Adaptation: Locally problem driven – looking to solve particular problems

Planning for action?
- Mainstream development projects/policies/programs: Lots of advance planning, articulating a plan of action, with implementation regarded as following the planned script
- Problem Driven Iterative Adaptation: ‘Muddling through’ with the authorization of positive deviance and a purposive crawl of the available design space

Feedback loops
- Mainstream development projects/policies/programs: Monitoring (short loops, focused on disbursement and process compliance) and evaluation (long feedback loop on outputs, maybe outcomes)
- Problem Driven Iterative Adaptation: Tight feedback loops based on the problem and experimentation, with information loops integrated with decisions

Plans for scaling up and diffusion of learning
- Mainstream development projects/policies/programs: Top-down – the head learns and leads, the rest listen and follow
- Problem Driven Iterative Adaptation: Diffusion of feasible practice across organizations and communities of practitioners

Source: Escaping Capability Traps through Problem Driven Iterative Adaptation (PDIA)

So how is adaptive management different to normal management?  And how is evaluation to support adaptive management different to evaluation to support learning and accountability?

In an ideal world, the concept of ‘management’ would include all levels of adaptive management – from taking steps to improve the quality of implementation, to changing what is implemented, or even what the objectives are. And the concept of ‘evaluation to support learning’ would include all levels of learning – from providing information about compliance with plans, to providing information about the effectiveness and ongoing appropriateness of those plans; and from providing information to inform a subsequent cycle of a program, to informing ongoing implementation. But in reality, many evaluations focus on producing a single evaluation report for a specific purpose, and, while lip-service is sometimes paid to ongoing learning, few evaluation processes and products are produced to support it.

Would it be helpful to add ‘adaptive management’ as an intended use for evaluation, in addition to the existing options of learning, accountability and informing decision-making listed on the Decide Purpose task page in the Rainbow Framework? Or should these aspects be incorporated into learning, accountability and decision-making?

3. How is it done? What does adaptive management mean for collecting, analysing, reporting and supporting use of data - and for managing evaluation?

Are there particular methods that are more appropriate for rapid turnaround?  What methods can be simple enough that they can be easily incorporated into routine processes? Are there implications for who should be involved in conducting evaluations (or evaluative episodes) and for governance and control? What does this imply for the management of evaluations, for the preparation of evaluators and for the development of evaluative competencies among other staff?

Let’s continue the conversation

How relevant are these ideas for your work?

How different are they from what you already do?

What are some challenges in doing evaluation in ways that support adaptive management?  How can they be overcome?

Are there good examples of evaluation for adaptive management we can learn from? Or guidance?

We’d love to hear your thoughts on the questions posed above in the comments below. And if you'd like to be involved in the discussion further and help with the development of an Adaptive Management Option page, please register your interest and let us know what examples, advice, resources or questions you'd like to share.

 
