What do we need for better monitoring?

29th November 2021 by Jo Hall

This blog by Jo Hall and Patricia Rogers provides an update on the Global Partnership for Better Monitoring project.

Earlier this year, we published a blog asking for your input into a project to improve our collective understanding and practice of the monitoring function. This project (the Global Partnership for Better Monitoring, or 'BetterMonitoring') is being run in partnership with UNICEF and others and is now approaching the close of Phase 1, so we're pleased to be able to provide an update.

In September, we invited input on the primary issues and challenges in monitoring, useful resources and tools, and feedback on a draft framework of monitoring tasks. As we said at the time, although effective monitoring is essential for managing performance, monitoring is often undervalued and understood quite narrowly. While the term "Monitoring and Evaluation" (M&E) is widely used, especially in international development, the monitoring function has not always benefited from the same level of investment, professionalisation and systems strengthening as the evaluation function. Instead, the monitoring function is often relegated to or viewed as a lower-level, technical function. This comes at the expense of really using monitoring to manage performance and maximise impact.

For reference, we use the word 'monitoring' in its broadest sense to refer to any process to periodically collect, analyse and use information to actively manage performance to maximise positive impact.

Input on the primary issues and challenges in monitoring

Below, we present some of the key messages from the survey responses about challenges in monitoring, supplemented by extracts from the open discussion on Peregrine:

Monitoring needs to better support effective management and implementation

Silva Ferretti reminded us that monitoring needs to do more than monitor a pre-established course of action and argued that much monitoring misses "the basics":

Of course some deeper, specialised monitoring functions are extremely helpful when working on complex change. But we are missing the basics: ensuring that monitoring is for everyone, is shared, and feeds continuously our actions and relations.

Ian Davies also pointed to the importance of relationships for effective management and challenged the concept of "managing performance":

Finally, you don't "manage performance". Performance is a construct. Virtual management begets virtual reality while those we disenfranchise live in reality. You manage by being actively involved and participating in the inter-relationships between messy human beings and their environments.

Ian Davies also discussed how monitoring and evaluation need to be done in ways that don't undermine management and implementation (including in terms of diverting resources, time or focus):

In that sense, I agree with you that "monitoring is undervalued" however, in reality it is management and its critical function that are taken for granted and ignored…The Evaluation industry keeps harping on about the imperative of so-called evaluation and/or M&E capacity building (and now I see we're trying to ramp up the M part) with the underlying assumption that management capacity (which arguably you need if you expect evaluation to translate into action) is there and sufficient enough to allow for allocating scarce resources elsewhere.

Jo Hall (one of the researchers on the BetterMonitoring project) responded to this, agreeing that monitoring is often not done in ways that effectively support managers and management:

I agree that 'better management' is missing from many of these discussions and I have been struggling with trying to articulate the integration of monitoring and management functions. I thankfully have met several great managers over the years (international development is my field) - my frustration is they are great (for the reasons you mention) in spite of and not because of the bureaucratic systems they have to deal with - which often includes not particularly useful or meaningful M&E systems.

Scott Chaplowe suggested that monitoring and evaluation need to be integrated into management processes and capacity development:

I recall about a decade ago, after the Tsunami Recovery Program with the American Red Cross, there was an effort to stop separate, siloed "M&E trainings" and instead weave them right into the overall project management trainings. I support such capacity development initiatives. M&E should not silo itself because monitoring service delivery and evaluative thinking are very much functions shared across an organisation and program areas.

Anuj Jain suggested a distinction between managers and leaders, and hence between the types of monitoring needed:

[Monitoring needs managers and leaders] who are truly able to understand complex operating realities and steer the program design/ interventions in a manner that make more contextual sense; as ground realities change constantly. More flexible the design is, more likely it is to achieve change in the hands of competent management/ leadership, where learning (what monitoring should be) and stakeholders analysis is able to inform on an on-going. Too many monitoring frames and tools / system even in largish programs are often too rigid and tools oriented, and by the time the learning emerges from those, the implementation ship often has already sailed.

Management is not just done by managers; monitoring is not just done by monitoring specialists

Bob Williams discussed the value of thinking about management, monitoring and evaluation as processes rather than focusing on them as activities of those only in particular roles:

Management is not a person or a position it is a process of resource allocation (in the broadest sense of the word resources). We all manage, we are all managers of resources. Some of us are better at it than others but we all do it.

Monitoring is a process of keeping an eye on what is going on. We all do it. Parents monitor their kids, I monitor my garden. Some of us do that better than others.

Evaluation is not a person or a product. It is a process of making judgments about the appropriate use of resources. We all do this. Some of us are better at that process than others, but we all do it.

New structures might be helpful for monitoring

Anuj Jain suggested establishing a 'learning partnership' with an external team, building in an ongoing systematic learning system, and opportunities to take stock and enact course correction:

The external team must have a mix of experience and skills in the subject matter of the project, as well as facilitation skills for generating learning (not to say, navigating the politics of project implementation). It is not an easy mix to find, but absolutely gratifying for both internal and external teams when the learning is not seen as a critique, and we all realise that errors are inevitable no matter how hard the implementation team has tried. We have also tried to have a 'capacity building' element in this learning partnership, where specific recommendations are identified to strengthen management/ leadership capacities. Let me just say, this differs hugely from an external third party M&E contract - both in letter and spirit. 

Monitoring is not only done for managers

Scott Chaplowe reminded us that, while monitoring is important for management and implementation, it should not only be done for or controlled by managers but also include intended recipients:

It is important to expand our thinking beyond conventional project/program "management" to include intended service recipients in the M&E…. M&E also often includes project/program design, and it is here that we also need to engage with the people we seek to serve with a seat at the table so they can decide if and how to design those interventions that will be monitored and evaluated.

Leslie Fox raised the issue of using monitoring and evaluation to inform the allocation of resources:

The allocation of resources as distinct from their management, monitoring and evaluation is a decision or judgement about how any societal entity prioritises its interests or the results it wants to achieve and then manages these resources to achieve its chosen ends.

We need to learn from good examples of monitoring

Seetharam Mukkavilli shared experiences of working in an organisation where monitoring was valued and well understood:

In the international NGO where I worked that operates world-wide, at the time of my work there – many years ago monitoring was robust and organisation wide. … Every country office implementing programs had at least one monitoring professional ably supported by field area managers providing valuable data as per the corporate monitoring system's requirements. Most beautiful part of it was that results information can be obtained for different interventions quickly using computerised data.

The distinction between monitoring and evaluation is becoming more blurred

Scott Chaplowe discussed the increasing need for monitoring to provide timely information, given that disruption is the new norm:

We need monitoring as evaluation to determine and respond quickly to whether planned results are occurring as intended and support adaptive management. But just as importantly, we need to think outside the project box and pursue context monitoring to interrogate any unplanned results on the larger human and natural ecosystem so we can respond in a timely fashion, whether it is to capitalise on opportunity (if a positive unintended outcome) or mitigate damage (if a negative unintended consequence).

Input on the specific proposed tasks for monitoring

We also received very helpful responses on the specific proposed tasks for monitoring through the survey. Some of the key points arising were:

  • The close connections between monitoring and evaluation.

  • A need for comprehensive systems that deal with a hierarchy of M&E (i.e. not project-centred). Alignment and harmonisation are important principles.

  • Monitoring should address unintended consequences as well as intended results.

  • Avoid driftnet monitoring and accurately target monitoring (but also be open to unexpected findings).

  • Analysis of monitoring data tends to be very underdone.

  • Monitoring is important for fast feedback. The question is not about frequency, but the timeliness of information.

  • Theories of change need to evolve, informed iteratively by monitoring information.

  • A focus on learning is important.

Some respondents were keen to answer evaluative and causal questions in monitoring, to understand why a particular result is being seen (at any stage of the results chain), while others felt monitoring should be confined to describing activities and processes.

What we have done with your input and feedback

As a result of this consultation, we decided that it was better to expand the existing BetterEvaluation Rainbow Framework to incorporate the monitoring function more clearly. While there were some differences of opinion in the survey responses, we decided that a standalone framework for monitoring would not be the best approach, because there is much overlap in tasks, methods and approaches, and because monitoring and evaluation are so closely linked. In developing this draft of a more integrated Rainbow Framework, we considered the feedback and incorporated many suggestions from the survey responses and discussion.

One major aspect of this work aims to make the Rainbow Framework clearer in how it deals with analysis to answer different types of questions: descriptive, causal, evaluative, predictive and action. Each of these types of questions requires a different kind of thinking, with different associated methods and tools.

You can find this new draft framework here. This isn’t a finished product – there’s still lots to do in terms of thinking through the structure and integration with the BetterEvaluation website and exploring what methods and processes will be useful for each of the identified tasks. We’re looking forward to continuing this work in 2022.

Competencies for the monitoring function

We've also used your feedback to inform our work on competencies for monitoring, with particular emphasis on the links between monitoring and management and bearing in mind that people in various roles undertake monitoring and management activities.

You can find a preliminary version of the key competencies for the monitoring function here. We welcome your reactions and feedback over the coming months.

Theme Page on Monitoring

We have also developed a theme page on Monitoring that provides an overview of the monitoring function and links to existing and new resources on the BetterEvaluation website. This theme page was also helpfully informed by your feedback.

The theme page on Monitoring can be found here.

Thank You!

We want to extend a huge thanks to everyone who provided feedback. We would also like to thank Anthea Moore (Emergency Specialist) and Joseph Barnes (Chief, Monitoring) at UNICEF for their support and collaborative engagement with us on this tricky topic of monitoring over the past few months.

We invite you to continue to engage with this work in 2022. You can follow progress on this page.

A special thanks to this page's contributors
Jo Hall, development evaluation consultant, Canberra, Australia.
Patricia Rogers, founder and former CEO of BetterEvaluation, Melbourne, Australia.

Comments

Ameen Benjamin

Thank you for this post. Interestingly, I have for the last 3 years already adapted the BetterEvaluation framework to include both Monitoring and Evaluation to guide the development of M&E systems in my organisation.

Finbar Lillis

I have followed the work of BE for some years now - my go-to network for fresh thinking on evaluation. I came here as a former community worker in the UK working on local, national and some European projects. I now tend to work on the kind of projects below - usually as an active participant/deliverer and often responsible for M/E. The results of the M exercise are very useful and timely. I have to pitch an approach this morning for an M/E task/role for my outfit, working as a partner on a national project with 40 or so universities and the NHS in England to open up access to degree-level nursing training for healthcare support workers.

I will be using the M/E framework to shape the pitch - and then plan to use it as a key reference point in the conduct of the M/E over a year or so. I would be interested in sharing practice with others who are also planning to use the M/E principles/framework/competences this year. Having been a passive user of BE up to now, I wonder if this networking is already happening? I'd be keen to join.

PS I need to re-register later today - my login information is well out of date - passive user syndrome :)

Daniel Ticehurst

Dear Jo and Patricia,

Although late in seeing this, I wanted to say how good it is to see more attention and focus on monitoring through the "bettermonitoring" initiative. I've always seen the acronym of M&E as problematic (with, if necessary, L and A added to remind us of/prompt why they are done).

As your blog implies, they are different in many ways - their purpose, who they are primarily for, requirements for comparative analysis and, not least, the different skills and experiences they call for. The challenge is not so much about methodology, but rather about how monitoring and the information it generates are integrated into organisational structures and processes, including decision-making at different levels. Above all, done well, it helps ensure and reinforce that programme and organisational teams are as accountable to, and learn from, those they support as those who fund them, by helping justify the original investment decision or the case for more funding. (Sorry....)

In this regard, I've found practices and lessons from some of the private sector companies I know about really useful points of learning, on four fronts:

1. ensuring that the scope of monitoring includes and is integrated with financial monitoring. A mundane point, but one often taken for granted; doing so helps: a) avoid developing parallel financial and "non-financial" monitoring systems; and b) organisations better learn about how much their "outputs" cost and how much is actually spent on them;

2. providing opportunities for those who deliver the support to clients to periodically come together, share experiences and learn from each other. We often limit participatory approaches to exclusive enquiry in learning from and being accountable to those we support. There are some great approaches on how to do this with staff teams, notably anecdote circles. (Rick Davies told me this is how the origins of MSC started off in Bangladesh.)

3. While understanding the reach among clients is important, you say how the scope of monitoring needs to go further and learn about how those reached have experienced some form of change. (And this includes assessing the assumptions made about how and why they can and will.) I agree. In the private sector, the most important question to answer is about behavioural change, as usefully explained in GTZ's approach to logframes. In other words, to what extent have clients responded to the support on offer? What have the outputs stimulated? For example, have they used/rejected them and, if so, why, how and to what extent does this vary by place, time, type of support and client group? The "formalisation" of all this started with Kaplan and Norton's balanced scorecard, specifically the client perspective, and builds on the more technocratic and top-down approaches to, for example, adoption rate surveys among farmers in the 1980s and beneficiary assessments soon after by the World Bank. (Albeit with a different approach, 60 Decibels have usefully picked up on this more recently.)

4. The responsibilities for doing all this are often built into the job descriptions of those who deliver the support and those who manage them. It is seldom the case that there is (need for) a stand-alone "MEL" team and/or, more recently, a "learning partner".

Too often, in my experience, we tend to overly complicate "MEL" and leave significant resources on the table. There's a lot of sophistry around. My guess is some of this comes from:

a) the over-ambition for monitoring that seeks to measure 'higher level' impacts (e.g. jobs created, income increased). This often overstretches internal capacity, encourages perceptions of monitoring as a technical, not a managerial, skill and divorces those responsible for monitoring from their colleagues in providing useful and timely information. This confuses and confounds M with E. "Rigorously" measuring such variables (as a statistician would define the term) is important, but I'd maintain just not by those who are responsible for monitoring; and

b) those who advertise their "MEL" prowess, who are great, wonderful and have published lots on evaluation, but have limited "lived with" experience in establishing practical, management-focussed monitoring "systems".

Thanks again, and apologies for my lazy language and candour at times, yet I hope you find some of the above to be of some use in moving on to Phase 2...

Best wishes,

Daniel 
