Participation not for you? Four reflections that might just change your mind
This month we start a series on participation in evaluation by Leslie Groves and Irene Guijt. This blog series aims to explore one simple question: How can we best open up evaluation processes to include those intended to benefit from a specific project, programme or policy?
A simple question. Yet one that is surprisingly often overlooked, or quickly dismissed, in international development.
Leslie has recently completed a global mapping for the UK Department for International Development of how evaluators engage with beneficiaries throughout the evaluation cycle. Irene has documented for UNICEF how to make impact evaluation more participatory, and is working with The MasterCard Foundation to make their principle of ‘listening deeply and elevating voice’ real.
Having been deeply involved in participatory development in the 1990s, we have watched with concern how little clarity or depth there is in the way the terms ‘participatory evaluation’ and ‘participation in evaluation’ are used today. We know of many cases where participatory evaluation has been watered down to simply mean ‘we asked intended beneficiaries what they thought’. But wouldn’t most of us consider this basic good evaluation practice? For us, being more participatory in evaluation practice can and should be about much more than just asking people to answer our questions. Recently, we have both looked into how participation features within evaluation practice. We have noticed the decline of the ‘P’ word and a surge of renewed interest in participatory practice via the new ‘F’ word – feedback.
During this blog series we will reflect on why participation in evaluation matters and how we can go about putting it into practice.
This week’s blog introduces four reflections from our experiences and research, each of which will be elaborated in future blogs.
Fuzzy language, fuzzy expectations, fuzzy practice
1. Evaluation commissioners and M&E practitioners refer to participation in evaluation in very different – and often imprecise – ways. Terminology matters: using the word ‘participation’ more precisely can clarify expectations and guide the quality of evaluation practice.
Have a think about how you use the term ‘participation’ in the context of evaluation. Are you referring to a one-way form of communication, for example, where a person responds to your evaluation questions? Or to a conversation between yourself and that person about which evaluation questions or monitoring indicators would be most helpful? Perhaps a conversation in which you hold the power to decide what gets collected and why, and who should be involved in analysis and how? Or do you use the term in the ‘participatory evaluation’ sense, where there is joint decision making throughout? Or is it a bit of each within one particular evaluation process?
Each of these different options is valid. But calling them all ‘participatory evaluation’ has led to a confusing state of affairs.
For us, “participatory evaluation” requires primary stakeholders to be included as co-evaluators, both to ensure the inclusion of their voices and values in evaluation and to help them strengthen their evaluation capacity. Where principles of empowerment and accountability to intended beneficiaries do not underpin the evaluation process, it is still relevant and possible to include participatory methods during evidence gathering, for example, or to take a participatory approach to survey design, implementation and analysis.
Talking about ‘participation in M&E’ and being precise about what you are doing when you use the term will help clarify different levels of aspiration and align expectations and practice.
Participation in evaluation is not about a method
2. Participation in evaluation involves far more than choosing certain data collection methods. It is about systematically thinking through why more participation of different stakeholders might be important and how this is best done at particular stages of the evaluation process.
Do you know of people who said ‘we did community mapping’ or ‘we asked for stories of change’ or ‘we did an SMS survey’ to explain how they had undertaken participatory evaluation?
Assuming certain kinds of methods are inherently more participatory is problematic. Firstly, the visual, qualitative or digital feedback methods often referred to as ‘participatory’ can be used in more or less participatory ways, depending on the intent of those in charge of the evaluation process. Conversely, methods not commonly associated with participation can be strongly owned by local citizens, such as community-defined and community-implemented quantitative surveys and participatory statistics.
Secondly, the choice of methods is only a small part of making evaluation (more) participatory. Strategic and practical choices are made about when in the evaluation process one is considering active roles for people intending to benefit. Making an evaluation more participatory will also be shaped by interest from commissioners, scope for organisational learning and adaptive programming, whether there is a need for local learning and which types of data are considered more or less rigorous.
In Blog 2, we will reflect more on words and methods.
Diverse purposes, diverse outcomes
3. Many might agree that participation in evaluation is a good thing to do. But is it also a smart thing to do? Making evaluation more participatory can fulfil a range of purposes with diverse outcomes. The diversity of options for more participation in evaluation makes it a serious possibility for any evaluation process.
Taking the time to make informed choices together about who should participate, when and how can enhance evaluation in many ways. Some might pursue increasing the influence of a wide group of people over evaluation processes because it contributes to meeting necessary ethical standards for evaluation practice (e.g. inclusion, informed consent). Others might recognise the benefits participation offers for more rigorous and robust evaluation processes and evidence generation. Still others might value its contribution to developmental outcomes, as in Empowerment Evaluation.
Information is power. Excluding people from helping prioritise questions, collect information, analyse or share findings can be disempowering and counter-productive. It can sustain existing power relations where only certain people are given access to evaluation findings or can express their views of a particular initiative. Learning about what does or does not work about an initiative in a given context could be useful for others to increase impact or avoid repeating mistakes.
In Blog 3, we consider different options available to evaluators seeking to mix and match different ways of enhancing participation for their situation.
“Participation in evaluation requires too much time and is too costly.”
4. Resistance to participation in evaluation is often rooted in assumptions about supposed high cost or time intensity, threat to evaluation independence or lack of interest on the part of evaluation commissioners. Such hesitance can be eased by busting a few myths.
To whet the appetite here are two common concerns, with more to come in Blog 4.
Making evaluation more participatory does not always require significant additional time. Some ways of enhancing participation can actually save time. For example, innovative ways of informing people about an upcoming evaluation cost little time. Yet they can increase participation and the quality of evidence gathering by giving people a chance to prepare answers in advance, to consult with others, and to be in a position to give informed consent to take part in the evaluation.
Obtaining rigorous evidence clearly comes at a cost. Those costs, however, should be weighed against the benefits that increasing participation in evaluation can secure: the most appropriate design, asking the right questions of the right people; the most appropriate form of evidence gathering; and validation and communication of results with those who will use the findings. Commissioners need to assess such cost–benefit considerations.
Hearing about your experiences
In our experience, enhancing participation in evaluation involves above all a shift in mindset and open conversations between evaluators and commissioners. While a fully participatory evaluation is not an option for everyone, it is both feasible and necessary to find ways to enable more influence by those intended to benefit from the interventions being evaluated.
Do these four reflections resonate with your own experiences?
What other reflections or concerns about participation in evaluation would you like to share? We look forward to hearing from you.
Q&A / webinar
Irene Guijt and Leslie Groves held a Q&A on the reflections presented in their blog series on participation in evaluation.