What do we need for better monitoring?

[Photo: spice market with Better Monitoring logo overlaid]

This blog by Jo Hall and Patricia Rogers provides an update on the Global Partnership for Better Monitoring project.

Earlier in 2021, we published a blog asking for your input into a project to improve our collective understanding and practice of the monitoring function. This project (the Global Partnership for Better Monitoring, or 'BetterMonitoring') is being run in partnership with UNICEF and others and is now approaching the end of Phase 1. As this phase draws to a close, we're happy to provide an update.

In September, we invited input on the primary issues and challenges in monitoring, useful resources and tools, and feedback on a draft framework of monitoring tasks. As we said at the time, although effective monitoring is essential for managing performance, monitoring is often undervalued and understood quite narrowly. While the term "Monitoring and Evaluation" (M&E) is widely used, especially in international development, the monitoring function has not always benefited from the same level of investment, professionalisation and systems strengthening as the evaluation function. Instead, monitoring is often relegated to, or viewed as, a lower-level technical function. This comes at the expense of really using monitoring to manage performance and maximise impact.

For reference, we use the word 'monitoring' in its broadest sense to refer to any process to periodically collect, analyse and use information to actively manage performance to maximise positive impact.

Input on the primary issues and challenges in monitoring

Below, we present some of the key messages from the survey responses about challenges in monitoring, supplemented by extracts from the open discussion on Peregrine:

Monitoring needs to better support effective management and implementation

Silva Ferretti reminded us that monitoring needs to do more than monitor a pre-established course of action and argued that much monitoring misses "the basics":

Of course some deeper, specialised monitoring functions are extremely helpful when working on complex change. But we are missing the basics: ensuring that monitoring is for everyone, is shared, and feeds continuously our actions and relations.

Ian Davies also pointed to the importance of relationships for effective management and challenged the concept of "managing performance":

Finally, you don't "manage performance". Performance is a construct. Virtual management begets virtual reality while those we disenfranchise live in reality. You manage by being actively involved and participating in the inter-relationships between messy human beings and their environments.

Ian Davies also discussed how monitoring and evaluation need to be done in ways that don't undermine management and implementation (including in terms of diverting resources, time or focus):

In that sense, I agree with you that "monitoring is undervalued" however, in reality it is management and its critical function that are taken for granted and ignored…The Evaluation industry keeps harping on about the imperative of so-called evaluation and/or M&E capacity building (and now I see we're trying to ramp up the M part) with the underlying assumption that management capacity (which arguably you need if you expect evaluation to translate into action) is there and sufficient enough to allow for allocating scarce resources elsewhere.

Jo Hall (one of the researchers on the BetterMonitoring project) responded to this, agreeing that monitoring is often not done in ways that effectively support managers and management:

I agree that 'better management' is missing from many of these discussions and I have been struggling with trying to articulate the integration of monitoring and management functions. I thankfully have met several great managers over the years (international development is my field) - my frustration is they are great (for the reasons you mention) in spite of and not because of the bureaucratic systems they have to deal with - which often includes not particularly useful or meaningful M&E systems.

Scott Chaplowe suggested that monitoring and evaluation needed to be integrated into management processes and development:

I recall about a decade ago, after the Tsunami Recovery Program with the American Red Cross, there was an effort to stop separate, siloed "M&E trainings" and instead weave them right into the overall project management trainings. I support such capacity development initiatives. M&E should not silo itself because monitoring service delivery and evaluative thinking are very much functions shared across an organisation and program areas.

Anuj Jain suggested a distinction between managers and leaders and hence the types of monitoring needed:

[Monitoring needs managers and leaders] who are truly able to understand complex operating realities and steer the program design/ interventions in a manner that make more contextual sense; as ground realities change constantly. More flexible the design is, more likely it is to achieve change in the hands of competent management/ leadership, where learning (what monitoring should be) and stakeholders analysis is able to inform on an on-going. Too many monitoring frames and tools / system even in largish programs are often too rigid and tools oriented, and by the time the learning emerges from those, the implementation ship often has already sailed.

Management is not just done by managers; monitoring is not just done by monitoring specialists

Bob Williams discussed the value of thinking about management, monitoring and evaluation as processes rather than focusing on them as activities of those only in particular roles:

Management is not a person or a position it is a process of resource allocation (in the broadest sense of the word resources). We all manage, we are all managers of resources. Some of us are better at it than others but we all do it.

Monitoring is a process of keeping an eye on what is going on. We all do it. Parents monitor their kids, I monitor my garden. Some of us do that better than others.

Evaluation is not a person or a product. It is a process of making judgments about the appropriate use of resources. We all do this. Some of us are better at that process than others, but we all do it

New structures might be helpful for monitoring

Anuj Jain suggested establishing a 'learning partnership' with an external team, building in an ongoing systematic learning system, and opportunities to take stock and enact course correction:

The external team must have a mix of experience and skills in the subject matter of the project, as well as facilitation skills for generating learning (not to say, navigating the politics of project implementation). It is not an easy mix to find, but absolutely gratifying for both internal and external teams when the learning is not seen as a critique, and we all realise that errors are inevitable no matter how hard the implementation team has tried. We have also tried to have a 'capacity building' element in this learning partnership, where specific recommendations are identified to strengthen management/ leadership capacities. Let me just say, this differs hugely from an external third party M&E contract - both in letter and spirit. 

Monitoring is not only done for managers

Scott Chaplowe reminded us that, while monitoring is important for management and implementation, it should not only be done for or controlled by managers but also include intended recipients:

It is important to expand our thinking beyond conventional project/program "management" to include intended service recipients in the M&E…. M&E also often includes project/program design, and it is here that we also need to engage with the people we seek to serve with a seat at the table so they can decide if and how to design those interventions that will be monitored and evaluated.

Leslie Fox raised the issue of using monitoring and evaluation to inform the allocation of resources:

The allocation of resources as distinct from their management, monitoring and evaluation is a decision or judgement about how any societal entity prioritises its interests or the results it wants to achieve and then manages these resources to achieve its chosen ends.

We need to learn from good examples of monitoring

Seetharam Mukkavilli shared experiences of working in an organisation where monitoring was valued and well understood:

In the international NGO where I worked that operates world-wide, at the time of my work there – many years ago monitoring was robust and organisation wide. … Every country office implementing programs had at least one monitoring professional ably supported by field area managers providing valuable data as per the corporate monitoring system's requirements. Most beautiful part of it was that results information can be obtained for different interventions quickly using computerised data.

The distinction between monitoring and evaluation is becoming more blurred

Scott Chaplowe discussed the increasing need for more monitoring to provide timely information given that disruption is the new norm:

We need monitoring as evaluation to determine and respond quickly to whether planned results are occurring as intended and support adaptive management. But just as importantly, we need to think outside the project box and pursue context monitoring to interrogate any unplanned results on the larger human and natural ecosystem so we can respond in a timely fashion, whether it is to capitalise on opportunity (if a positive unintended outcome) or mitigate damage (if a negative unintended consequence).

Input on the specific proposed tasks for monitoring

We also received very helpful responses on the specific proposed tasks for monitoring through the survey. Some of the key points arising were:

  • The close connections between monitoring and evaluation.
  • A need for comprehensive systems that deal with a hierarchy of M&E (i.e. not project-centred). Alignment and harmonisation are important principles.
  • Monitoring should address unintended consequences as well as intended results.
  • Avoid driftnet monitoring and accurately target monitoring (but also be open to unexpected findings).
  • Analysis of monitoring data tends to be very underdone.
  • Monitoring is important for fast feedback. The question is not about frequency, but the timeliness of information.
  • Theories of change need to evolve, informed iteratively by monitoring information.
  • Focus on learning is important.

Some respondents were keen to answer evaluative and causal questions in monitoring, to understand why a particular result is being seen (at any stage of the results chain), while others felt monitoring should be confined to describing activities and processes.

What we have done with your input and feedback

As a result of this consultation, we decided that it was better to expand the existing BetterEvaluation Rainbow Framework to incorporate the monitoring function more clearly. While there were some differences of opinion in the survey responses, we decided that a standalone framework for monitoring would not be the best approach, because there is much overlap in tasks, methods and approaches, and because monitoring and evaluation are closely linked. In developing this draft of a more integrated Rainbow Framework, we considered the feedback and incorporated many suggestions from the survey responses and discussion.

One major aspect of this work is making the Rainbow Framework clearer in how it deals with analysis to answer different types of questions: descriptive, causal, evaluative, predictive and action questions. Each of these types of questions requires a different kind of thinking, with different associated methods and tools.

You can find this new draft framework below. This isn’t a finished product – there’s still lots to do in terms of thinking through the structure and integration with the BetterEvaluation website and exploring what methods and processes will be useful for each of the identified tasks. We’re looking forward to continuing this work in 2022.

Competencies for the monitoring function

We've also used your feedback to inform our work on competencies for monitoring, with particular emphasis on the links between monitoring and management and bearing in mind that people in various roles undertake monitoring and management activities.

You can find a preliminary version of the key competencies for the monitoring function below. We welcome your reactions and feedback over the coming months.

Theme Page on Monitoring

We have also developed a thematic page for Monitoring that provides an overview of the monitoring function and links to existing and new resources on the BetterEvaluation website. This theme page was also helpfully informed by your feedback.

Thank You!

We want to extend a huge thanks to everyone who provided feedback. We would also like to thank Anthea Moore, Emergency Specialist, and Joseph Barnes, Chief, Monitoring, at UNICEF, for their support and collaborative engagement with us on this tricky topic of monitoring over the past few months.

We invite you to continue to engage with this work in 2022. You can follow progress on this page.