Becoming aware of contradictory demands on evaluation systems

By
Marijn Faling

The following article was written by Marijn Faling, Assistant Professor of Evaluation and Private-sector Development at the International Institute of Social Studies, Erasmus University Rotterdam, the Netherlands.

Contradictory demands in evaluation

Evaluation systems often face seemingly contradictory demands, because different stakeholders have different expectations of evaluation. Implementers may need evaluation systems to support monitoring for learning, whereas donors may want them to produce evidence of impact for accountability.

But did you know that learning vs accountability is just one among multiple contradictory demands on evaluation systems? Evaluation of complex, multi-actor programmes set in uncertain environments is an especially fertile breeding ground for contradictory demands (see the blog post ‘addressing complexity’ for a discussion of complicated vs complex interventions). Most notably, M&E systems are required to anticipate unknown outcomes and untangle multiple related impact pathways, while remaining actionable, practicable, and cost-effective. To build well-functioning M&E systems, we need to be aware of these competing demands.

Overlooking competing demands might jeopardize M&E systems

Let’s take a closer look at why awareness of competing demands is important. First, M&E systems need to meet various requirements to safeguard their legitimacy in the eyes of different stakeholders. Second, if overlooked, competing demands may create tensions for evaluators and stakeholders. This obviously complicates the daily work of evaluation professionals. More importantly, a lack of clarity about the purpose and design of M&E systems often results in failing M&E systems: think of a mismatch between program characteristics and evaluation design, or methods that cannot answer the evaluation questions. Acknowledging and addressing contradictory demands is therefore essential. It helps evaluators recognize the various design options available and make deliberate choices among them, facilitating well-functioning evaluation systems.

There are different ways to approach these competing demands, each with different implications. When demands are approached as trade-offs, one is preferred over the other: an M&E system may consequently be designed solely for learning, not accountability. Although this may seem a simple and straightforward way to avoid complications, it hampers the legitimacy, functions, and uses of M&E systems. We therefore advocate approaching contradictory demands as paradoxes: recognising competing demands and accommodating them together in a single M&E system.

Five paradoxes in M&E

Based on our experience designing, implementing, and managing the M&E system for 2SCALE, a complex multi-stakeholder programme to advance food security through inclusive agribusiness in sub-Saharan Africa, we identify five paradoxes. By describing them, we hope to provide a language to identify, discuss, and act upon competing demands. We offer insights from this case to illustrate how we have tried to accommodate all five in one single system, but we do not claim this is the preferred approach. Why not? First, we found that any attempt to embrace paradoxes is a temporary solution that leads to new paradoxes and challenges; there is no ultimate solution to competing demands, so they require continued attention. Second, although the paradoxes may be universal, we expect ways of addressing them to be highly context-specific and dependent on both the program and the M&E system.

The paradoxes, the competing demands, their underlying logic, and our way of dealing with them are summarized below.

Purpose
The challenge: Which purpose does the M&E system serve?
Competing demands: M&E systems are expected to serve the purpose of learning, while simultaneously enabling accountability.
Our way of dealing with them: The system combines Impact Pathways for learning with Universal Impact Indicators for accountability, fostering both learning processes among implementing partners and donor satisfaction.

Position
The challenge: How to position the evaluator vis-à-vis the program being evaluated?
Competing demands: The evaluator is expected to be autonomous from the program to ensure objectivity, while simultaneously being involved to ensure alignment with the program.
Our way of dealing with them: The team operates partially from universities in the Netherlands for independence, while also embedding team members within country offices for closer program interaction, ensuring both objectivity and alignment.

Permeability
The challenge: How to engage surrounding stakeholders in the M&E system?
Competing demands: The M&E system should be open to practitioner engagement in design and operations to secure relevance, while simultaneously being closed to outside influences to ensure efficiency.
Our way of dealing with them: The team initially welcomed input on system design but carefully chose when to adapt or maintain indicator methods to preserve the system’s integrity, demonstrating both flexibility and reliability.

Method
The challenge: How to design the M&E system to deliver optimal results?
Competing demands: The M&E system should accommodate rigorous systematics to safeguard reliability and comparability, while simultaneously incorporating flexibility to accommodate dynamic and non-linear realities.
Our way of dealing with them: The 2SCALE M&E system uses standardized yet adaptable formats such as Impact Pathways, allowing for rigorous yet flexible evaluation tailored to each partnership’s specific outcomes.

Acceptance
The challenge: How to design an M&E system that is acceptable from multiple perspectives?
Competing demands: The M&E system should be extensive and robust to be credible and reliable, while simultaneously being understandable, lean, manageable, and economical.
Our way of dealing with them: 2SCALE employs proxy indicators and primary data to assess impacts such as improved access to nutritious foods for base-of-the-pyramid consumers, ensuring credibility without overwhelming resource use.

So what?

To summarize, M&E is a complicated endeavor, and evaluators often face competing demands. These range from expectations of both involvement in and autonomy from the program being evaluated, to demands for rigorous, reliable M&E systems that are simultaneously flexible enough to adapt to changing intervention logics. We argue that identifying and accommodating these competing demands helps to ensure the legitimacy and success of M&E systems. We don’t think we have identified all the paradoxes in evaluation, so let us know if you encounter a paradox in your work!
