Doing Development Differently means doing monitoring, evaluation & learning differently too


This week, Arnaldo Pellini (Senior Research Fellow, Overseas Development Institute and Lead for Learning at the Knowledge Sector Initiative, Indonesia) and Louise Shaxson (Senior Research Fellow, Overseas Development Institute) reflect on some of the challenges around monitoring, evaluation and learning (MEL) in adaptive programmes.

One of our objectives for this Adaptive Management series is to revise the Decide Purpose task page in BetterEvaluation's Rainbow Framework, and perhaps to add a new method, 'Support adaptive management'. We're looking to learn from your experience on this. If you'd like to be part of this process to co-create and share knowledge, please click the link at the end of the blog to connect with us. And of course, we welcome comments directly on the blog page too - we've posed a number of questions at the end of the blog and would love to hear your thoughts on these.

Recently, we attended a two-day workshop on ‘Implementing the New Development Agenda: Doing Development Differently (DDD), Thinking and Working Politically (TWP) and Problem Driven Iterative Adaptation (PDIA).’ The event was co-organised by the Knowledge Sector Initiative, the KOMPAK programme, and the World Bank in Indonesia, and was attended by practitioners, researchers, government and other partners – and it was great to see how the debate is becoming more mainstream and nuanced. With the benefits of adaptive programming firmly accepted (at least by those in attendance), workshop sessions provided space to explore more in-depth questions, such as: how can you monitor, evaluate and learn (MEL) from adaptive programmes?

MEL is particularly important for the quick feedback loops needed to inform adaptive programming. The session discussion was really interesting, and it looks like there may be an emerging DDD MEL community of practice. Broadly speaking, the discussion identified two main challenges and drew out experiences that show how they can be overcome:

1. The need for an integrated MEL team that encourages a culture of learning

Doing Development Differently requires programme teams to contribute actively and explicitly to programme learning, for example by providing feedback and reflection in meetings or after-action reviews. This should help to inform programme decision-making. However, programme teams often do not have the necessary skills to do this – or they do not see it as part of their day-to-day work, and don’t make time for it.

For too many development programmes, the monitoring and evaluation (and only sometimes learning!) unit is staffed with M&E experts who specialise in meeting donor accountability reporting requirements. They tend to be separate from the work of the programme teams, and from day-to-day interaction with partners. They develop their own plans for monitoring activities and rarely involve the programme team and partners. What’s more, the M&E manager is often not part of the programme’s senior management team.

Some solutions we identified:

  • Programme team leaders need to invest in developing different learning capacities within teams.

Programme officers’ job descriptions should include monitoring and learning tasks, as well as soft skills such as facilitation to enable better participation in programme design and activities.

  • The programme’s senior management team should have strategies in place to develop a learning culture within their teams.

Learning cannot be imposed on staff; it has to emerge in a participatory way. Examples include rewarding writing and publications, creating space for open discussion about what works and what does not, and generating evidence that being involved in monitoring and learning actually helps staff improve their day-to-day work with partners.

  • MEL teams and units should be set up only if they are really needed and add value.

If the MEL work done by the programme teams is sufficient, then there is no need for a dedicated MEL team. Where one is necessary (for instance in larger programmes), greater collaboration between the MEL and programme teams can be fostered by creating space for reflection and sharing that has a clear purpose. For example, the purpose may be to inform programme staff about decisions, validate evidence of progress, or reinforce the programme goals and approach. Another way would be to involve MEL staff in programme activities, such as designing prototypes.

  • All this requires a flexible environment with real delegation of authority and decision-making within teams.

2. The need for MEL frameworks that are fit for purpose (enabling learning and adaptation, rather than only accountability and milestone reporting)

MEL frameworks and approaches are often produced too late in the programme cycle, and they are frequently overcomplicated. There are two reasons for this. First, there is sometimes a misunderstanding about what the funder expects. Second, MEL plans are often a key deliverable that triggers a milestone payment to the contractor, and therefore need a lot of detail.

Another problem is that adaptive programmes – especially in the field of social change and policy innovation – may implement several pilots at the same time. Monitoring and learning from these pilots results in rich case studies and stories of change, but reporting to the funder is often aggregated, which loses important information about patterns and differences. When the information becomes too generalised, it is less useful for informing the funder’s investment decisions.

Some of the solutions identified were:

  • Programme leadership could take the initiative to design reporting processes that start from what is most useful for the programme, and to discuss changes to reporting requirements with the donor.

For example, the case could be made for annual progress and learning reports, with very brief highlight updates on a quarterly or six-monthly basis.

  • MEL systems have to be fit for purpose: do not “over-engineer” the MEL framework and approach.

Start small. Be adaptive and test a few simple tools and questions that help you learn. Work with funders to select tools and processes that work for the programme. And invest in team capacities and capabilities that really help to inform an adaptive approach.

  • MEL teams have to be integrated into the work of the programme.

Much of the discussion above has focused on the monitoring aspects of MEL: the ongoing gathering and reporting of data. This leads to the question: is there a role for evaluation (discrete studies) in supporting adaptive programming? And if so, what might it look like? But that’s a discussion for another blog (including how we define evaluation as compared to monitoring).

 

Let’s continue the conversation

  • How relevant are these ideas for your work?
  • How different are they from what you already do?
  • What are some challenges in doing evaluation in ways that support adaptive management?  How can they be overcome?
  • Are there good examples of evaluation for adaptive management we can learn from? Or guidance?

We’d love to hear your thoughts on the questions posed above in the comments below. And if you'd like to be involved in the discussion further and help with the development of an Adaptive Management Method page, please register your interest and let us know what examples, advice, resources or questions you'd like to share.
