Does evaluation need to be done differently to support adaptive management?

15th March 2017 by Patricia Rogers

Adaptive management is usually understood to refer to an iterative process of reviewing and making changes to programmes and projects throughout implementation. Commonly associated with environmental and natural resource management, it is now spreading into other areas of programme management and development. Over the next few weeks, we'll be focusing on the growing interest in how monitoring and evaluation can support adaptive management.

This blog starts a process of exploring this issue. We'll be continuing this focus next week with a guest blog from Fred Carden and Arnaldo Pellini, in which they share what they learned about adaptive management in a major project on developing capacity for evidence-based policy.

One of our objectives for this Adaptive Management series is to revise the Decide Purpose task page in BetterEvaluation's Rainbow Framework, and perhaps add a new option of 'Support adaptive management'. To do this we're looking to learn from your experience. We've posed a number of questions throughout this piece and at the end of the blog, and would love to hear your thoughts.

We're currently exploring new ways of working with BetterEvaluation members and the evaluation community to co-create and share knowledge. If you'd like to be part of this, please click the link at the end of the blog to connect with us and tell us a bit about your experiences or questions. And of course we welcome comments directly on the blog page too.

Using evaluation to support adaptive management

There have been a number of important projects on this issue, and a flurry of recent blogs on the topic.

This work has made me think about a number of implications for monitoring and evaluation: in particular, when it is done, why it is done (and for whom), and how it is done.

  1. When is evaluation done? Throughout the project cycle, not just at the end

While the BetterEvaluation platform is intended to encompass all types of evaluative activity (before, during and after implementation), I am constantly surprised by people whose concept of evaluation is narrowly restricted to producing a single evaluation report at the end of a project.

For example, a recent paper on impact investing in the Stanford Social Innovation Review showed a typical example of evaluation coming only at the end of the project cycle, with nothing but monitoring during implementation.

Reserving the term 'evaluation' for what comes at the end is problematic, given that 'monitoring' is usually understood as checking compliance with performance indicator targets.

It’s far better to show evaluation going on throughout the project cycle, including during implementation, as this example from the Network of International Development Organisations in Scotland (NIDOS) shows.

However, this still limits the evaluation during implementation to mid-term evaluations, which have limited scope for informing the ongoing adaptation that is at the heart of adaptive management.

I wonder how we can clearly include in the definition of, and planning for, evaluation these smaller, iterative studies. They fit neither the way evaluation is so often defined (few in number, large, externally conducted and independent, and focused on impact) nor the way monitoring is usually understood (tracking performance against key indicators identified in advance). Would it be helpful to refer to 'episodes of evaluation', 'evaluative inquiry', or 'reality testing', the small-scale, iterative process Michael Quinn Patton recommends before framing a formal evaluation? Do we need a new term (and therefore a new type of evaluation), or just to be clear that the term 'evaluation' includes these smaller efforts during implementation?

  2. Why is evaluation done and for whom? Informing different levels of learning and adaptation by different people

Doing evaluation during implementation provides an opportunity to make changes to implementation in response to what is found (depending on the authorising environment, which discussions of adaptive management also explicitly address). But these changes can be made at different levels, and undertaken (and authorised) by different people.

One level of adaptation means doing the same things but doing them better: more completely, or more on schedule. Another means tweaking implementation, perhaps even trying some different activities to achieve the same objectives.

These are the levels of adaptation suggested in this diagram from the Tasmanian Parks and Wildlife Service.

Even better is the following diagram from the California Department of Fish and Wildlife, which shows a range of adaptations, including changing the objectives and even the understanding of the problem or situation being addressed. They describe an adaptive management approach as providing "a structured process that allows for taking action under uncertain conditions based on the best available science, closely monitoring and evaluating outcomes, and re-evaluating and adjusting decisions as more information is learned."

Most importantly, adaptive management can involve much quicker cycles within a project.
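To make these levels concrete, here is a minimal illustrative sketch in Python. Everything in it is hypothetical: the 'evaluative episode' is a stand-in for whatever monitoring data or rapid feedback a real cycle would draw on, and the mapping from findings to levels of adaptation is deliberately simplified.

```python
# Hypothetical sketch of nested adaptation levels driven by quick
# evaluative cycles. All names and findings are invented for illustration;
# they are not drawn from the blog or the diagrams it cites.

def evaluative_episode(cycle: int) -> dict:
    """Stand-in for a quick evaluative check during implementation.
    In practice this would draw on monitoring data, rapid participant
    feedback, or a small targeted study."""
    # Fabricated findings purely to drive the example loop.
    return {
        "execution_ok": cycle != 1,   # cycle 1: delivery problems
        "activities_ok": cycle != 2,  # cycle 2: activities not working
        "objectives_ok": cycle != 3,  # cycle 3: objectives off target
        "framing_ok": True,
    }

def choose_adaptation(findings: dict) -> str:
    """Map findings to the deepest level of change they call for."""
    if not findings["framing_ok"]:
        return "reframe the problem and rebuild the theory of change"
    if not findings["objectives_ok"]:
        return "revise the objectives themselves"
    if not findings["activities_ok"]:
        return "tweak or swap activities, keeping the same objectives"
    if not findings["execution_ok"]:
        return "same activities, delivered better (quality, timeliness)"
    return "no change needed this cycle"

for cycle in range(4):
    print(f"cycle {cycle}: {choose_adaptation(evaluative_episode(cycle))}")
```

The point of the loop is simply that each quick cycle asks not only 'are we doing things right?' but, where the evidence warrants it, 'are we doing the right things, for the right objectives, against the right problem?'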

I find particularly helpful the following table by Andrews, Pritchett and Woolcock (2012), which contrasts Problem Driven Iterative Adaptation (PDIA) with mainstream development and traditional project planning, where the emphasis is on upfront planning before acting, then on checking compliance and finally whether it worked, rather than on informing ongoing change.

Table 1: Contrasting current approaches and PDIA

| Elements of Approach | Mainstream Development Projects/Policies/Programs | Problem Driven Iterative Adaptation |
| --- | --- | --- |
| What drives action? | Externally nominated problems or ‘solutions’ in which deviation from ‘best practices’ forms is itself defined as the problem | Locally Problem Driven – looking to solve particular problems |
| Planning for action? | Lots of advance planning, articulating a plan of action, with implementation regarded as following the planned script | ‘Muddling through’ with the authorization of positive deviance and a purposive crawl of the available design space |
| Feedback loops | Monitoring (short loops, focused on disbursement and process compliance) and evaluation (long feedback loop on outputs, maybe outcomes) | Tight feedback loops based on the problem and experimentation, with information loops integrated with decisions |
| Plans for scaling up and diffusion of learning | Top-down – the head learns and leads, the rest listen and follow | Diffusion of feasible practice across organizations and communities of practitioners |

Source: Escaping Capability Traps through Problem Driven Iterative Adaptation (PDIA)

So how is adaptive management different to normal management?  And how is evaluation to support adaptive management different to evaluation to support learning and accountability?

In an ideal world, the concept of ‘management’ would include all levels of adaptive management: from taking steps to improve the quality of implementation to changing what is implemented, or even what the objectives are. And the concept of ‘evaluation to support learning’ would include all levels of learning: from providing information about compliance with plans to providing information about the effectiveness and ongoing appropriateness of those plans, and from informing a subsequent cycle of a program to informing ongoing implementation. But in reality, many evaluations focus on producing a single evaluation report for a specific purpose, and while lip-service is sometimes paid to ongoing learning, few evaluation processes or products actually support it.

Would it be helpful to add ‘adaptive management’ as an intended use for evaluation, in addition to the existing options of learning, accountability and informing decision-making listed on the Decide Purpose task page in the Rainbow Framework? Or should these aspects be incorporated into learning, accountability and decision-making?

  3. How is it done? What does adaptive management mean for collecting, analysing, reporting and supporting use of data, and for managing evaluation?

Are there particular methods that are more appropriate for rapid turnaround?  What methods can be simple enough that they can be easily incorporated into routine processes? Are there implications for who should be involved in conducting evaluations (or evaluative episodes) and for governance and control? What does this imply for the management of evaluations, for the preparation of evaluators and for the development of evaluative competencies among other staff?

Let’s continue the conversation

How relevant are these ideas for your work?

How different are they from what you already do?

What are some challenges in doing evaluation in ways that support adaptive management?  How can they be overcome?

Are there good examples of evaluation for adaptive management we can learn from? Or guidance?

We’d love to hear your thoughts on the questions posed above in the comments below. And if you'd like to be involved in the discussion further and help with the development of an Adaptive Management Option page, please register your interest and let us know what examples, advice, resources or questions you'd like to share.


Comments

Craig Leisher

My organisation works in rural areas of developing countries, where people and nature are interdependent and changes in one cause changes in the other. In complex social-ecological systems, there is a large incentive to use adaptive management (AM), yet conservation organisations rarely do so. There are several reasons. First, it is often unclear whether an alternative strategy is better or not. Second, the path dependence of donors and staff creates inertia to continue with the status quo (see Game et al. 2013 for more on AM in conservation). One solution is to make better use of technology. The failure point in most conservation projects is social rather than financial or ecological. To understand the social side, we collect mobile phone numbers during the baseline household survey and periodically text people key questions about project activities. They’re our ‘canaries in the coal mine’ who give almost real-time warnings about social issues that could impact the project. We know the location and demographics of the respondents, so we can pinpoint people who perceive a negative impact from a project activity and work to fix the issue before it becomes a strategy killer. It’s a short-loop, incremental AM approach that’s cheap to implement and can be largely automated.
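To make the automation concrete, here is a minimal hypothetical sketch of the kind of short-loop flagging Craig describes. The response records, field names, rating scale and threshold are all invented for illustration, and sending or receiving the texts themselves would need an SMS gateway, which is not shown.

```python
# Hypothetical sketch of short-loop flagging from periodic SMS check-ins.
# The data, rating scale and threshold are invented; this is not the
# commenter's actual system.

from collections import defaultdict

# Each record: (respondent_id, village, project_activity, rating 1-5),
# as might be collected from texted survey questions.
responses = [
    ("r01", "village_a", "tree_planting", 4),
    ("r02", "village_a", "tree_planting", 2),
    ("r03", "village_a", "tree_planting", 1),
    ("r04", "village_b", "tree_planting", 5),
    ("r05", "village_b", "patrols", 2),
]

NEGATIVE = 2      # ratings at or below this count as a negative perception
FLAG_SHARE = 0.5  # flag when half or more of a group's ratings are negative

# (village, activity) -> [negative_count, total_count]
tallies = defaultdict(lambda: [0, 0])
for _, village, activity, rating in responses:
    tallies[(village, activity)][1] += 1
    if rating <= NEGATIVE:
        tallies[(village, activity)][0] += 1

# Flag location/activity pairs for rapid follow-up by project staff.
for (village, activity), (neg, total) in tallies.items():
    if neg / total >= FLAG_SHARE:
        print(f"Follow up: {activity} in {village} "
              f"({neg}/{total} negative responses)")
```

Because the flags are grouped by location and activity, staff can act on a specific issue in a specific place within days rather than waiting for a mid-term evaluation.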

Patricia Rogers

Thanks for sharing this example, Craig. Getting real-time data from local residents is particularly important for supporting adaptive management.

John C Dalton

First of all, hats off to the authors! Well done. Adaptive "anything" in development terms is a high-risk venture. The donor may squeal, the staff may scream, the home office may faint and start firing off emails... High risk! So, the most important feature of adaptive management is for the manager to be experienced and empowered. When you are heading over the cliff, it won't do to have a series of discussions that consume lots of time and do not save the ship or its occupants. Craig (above) is correct that there may be no direct evidence that the adaptation option is better. (I don't think the whole project strategy could be adapted in toto, since that would mean you were doing something almost totally different, but certainly the implementation strategy being used could be adapted.) Finally, once all the feathers have been plucked from the original chicken, are we evaluating the chicken, the plucked chicken, the decision to pluck the chicken... or the feathers?

Patricia Rogers

I agree that it's important to be able to identify when adaptive management is likely to be appropriate. That is probably a combination of when it is needed (you can't predict in advance what should be done) and when it is possible (there is high-level support for changing direction if needed, and access to the types of information needed to inform those changes).

Jindra Cekan, PhD

Patricia - great work! Is there clear guidance somewhere on how to structure good quality evaluative feedback from the oft-cited 'feedback loops' into adaptive management?  Ideally they'd be dual directional, not just extractive, valuing the voices of participants and partners and moving projects towards sustained impact!
