Participation not for you? Four reflections that might just change your mind

12th June 2015 by Leslie Groves and Irene Guijt

This month we start a series on participation in evaluation​ by Leslie Groves and Irene Guijt. Leslie has recently completed a global mapping for the UK Department for International Development of how evaluators engage with beneficiaries throughout the evaluation cycle. Irene has documented for UNICEF how to make impact evaluation more participatory, and is working with The MasterCard Foundation to make their principle of ‘listening deeply and elevating voice’ real.

This blog series aims to explore one simple question: How can we best open up evaluation processes to include those intended to benefit from a specific project, programme or policy? A simple question. Yet it is one that is surprisingly often overlooked, or quickly dismissed, in international development.

Having been deeply involved with participatory development in the 1990s, we have watched the terms ‘participatory evaluation’ and ‘participation in evaluation’ being used with little clarity or depth in recent times. We know of many cases where participatory evaluation has been watered down to simply mean ‘we asked intended beneficiaries what they thought’. But wouldn’t most of us consider this basic good evaluation practice? For us, being more participatory in evaluation practice can and should be about much more than just asking people to answer our questions. Recently, we have both looked into how participation features within evaluation practice. We have noticed the decline of the ‘P’ word and a surge of renewed interest in participatory practice via the new ‘F’ word – feedback.

During this blog series, beginning in June, we will reflect on why participation in evaluation matters and how we can go about putting it into practice. We would also like to offer you the opportunity to discuss these issues and ask questions in a webinar in July.

Q&A / webinar

In response to popular demand, Irene Guijt and Leslie Groves held a Q&A on the reflections presented in their blog series on participation in evaluation. View the recording below.

We offer this blog series to encourage readers to extend their thinking and practice about participation in evaluation.

Week 1

This week’s blog introduces four reflections from our experiences and research, each of which will be elaborated in future blogs.

Fuzzy language, fuzzy expectations, fuzzy practice

  1. Evaluation commissioners and M&E practitioners refer to participation and evaluation in very different – and often imprecise – ways. Terminology matters – using the word ‘participation’ more precisely can clarify expectations and guide the quality of evaluation practice.

Think about how you use the term ‘participation’ in the context of evaluation. Are you referring to a one-way form of communication, for example where a person responds to your evaluation questions? Or to a conversation between yourself and that person to discuss which evaluation questions or monitoring indicators would be most helpful to use? Is this a conversation in which you hold the power to decide what gets collected and why, and who should be involved in analysis and how? Or do you use the term in the ‘participatory evaluation’ sense, where there is joint decision-making throughout? Or is it a bit of each within one particular evaluation process?

Each of these different options is valid. But calling them all ‘participatory evaluation’ has led to a confusing state of affairs.

For us, “participatory evaluation” requires primary stakeholders to be included as co-evaluators, both to ensure the inclusion of their voices and values in evaluation and to help them strengthen their evaluation capacity. Where principles of empowerment and accountability to intended beneficiaries do not underpin the evaluation process, it is still relevant and possible to include participatory methods during evidence gathering, for example, or to take a participatory approach to survey design, implementation and analysis. 

Talking about ‘participation in M&E’ and being precise about what you are doing when you use the term will help clarify different levels of aspiration and align expectations and practice.

Participation in evaluation is not about a method

  2. Participation in evaluation involves far more than choosing certain data collection methods. It is about systematically thinking through why more participation of different stakeholders might be important and how this is best done at particular stages of the evaluation process.

Do you know of people who said ‘we did community mapping’ or ‘we asked for stories of change’ or ‘we did an SMS survey’ to explain how they had undertaken participatory evaluation?

Assuming certain kinds of methods are inherently more participatory is problematic. Firstly, the visual, qualitative or digital feedback methods often referred to as ‘participatory’ can be used in more or less participatory ways, depending on the intent of those in charge of the evaluation process. Conversely, methods not commonly associated with being participatory can, in fact, be highly owned by local citizens, such as community-defined and community-implemented quantitative surveys and participatory statistics.

Secondly, the choice of methods is only a small part of making evaluation (more) participatory. Strategic and practical choices are made about when in the evaluation process to consider active roles for the people intended to benefit. Making an evaluation more participatory will also be shaped by interest from commissioners, the scope for organisational learning and adaptive programming, whether there is a need for local learning, and which types of data are considered more or less rigorous.

In Blog 2, we will reflect more on words and methods.

Diverse purposes, diverse outcomes

  3. Many might agree that participation in evaluation is a good thing to do. But is it also a smart thing to do? Making evaluation more participatory can fulfil a range of purposes with diverse outcomes. The diversity of options for more participation in evaluation makes it a serious possibility for any evaluation process.

Taking the time to make informed choices together about who should participate, when and how can enhance evaluation in many ways. Some might pursue increasing the influence of a wide group of people over evaluation processes because it contributes to meeting necessary ethical standards for evaluation practice (e.g. inclusion, informed consent). Others might recognise the benefits participation offers for more rigorous and robust evaluation processes and evidence generation. Still others might value the contribution to developmental outcomes, as in Empowerment Evaluation.

Information is power. Excluding people from helping to prioritise questions, collect information, or analyse and share findings can be disempowering and counter-productive. It can sustain existing power relations in which only certain people are given access to evaluation findings or can express their views of a particular initiative. Learning about what does or does not work about an initiative in a given context could be useful for others seeking to increase impact or avoid repeating mistakes.

In Blog 3, we consider different options available to evaluators seeking to mix and match different ways of enhancing participation for their situation.

“Participation in evaluation requires too much time and is too costly.”

  4. Resistance to participation in evaluation is often rooted in assumptions about supposed high cost or time intensity, threat to evaluation independence, or lack of interest on the part of evaluation commissioners. Such hesitance can be eased by busting a few myths.

To whet the appetite, here are two common concerns, with more to come in Blog 4.

Making evaluation more participatory does not always require significant additional time. Some ways to enhance participation can actually save time. For example, innovative ways of informing people about an upcoming evaluation cost little time. Yet they can increase participation and the quality of evidence gathering by giving people a chance to prepare answers in advance, to consult with others, and to be in a position to provide genuinely informed consent to take part in the evaluation.

Obtaining rigorous evidence clearly comes at a cost. Those costs, however, need to be set against the benefits that increasing participation in evaluation can bring: the most appropriate design, asking the right questions of the right people; the most appropriate form of evidence gathering; and validation and communication of results with those who will use the findings. Commissioners need to weigh up such cost-benefit considerations.

Hearing about your experiences

In our experience, enhancing participation in evaluation primarily involves a shift in mindset, and open conversations between evaluators and commissioners. While participatory evaluation is not an option for everyone, it is feasible and necessary to find ways to enable more influence by those intended to benefit from the interventions being evaluated.

Do these four reflections resonate with your own experiences?

What other reflections or concerns about participation in evaluation would you like to share? Please add a comment below to share your experiences. We look forward to hearing from you.

Image: Your Participation is Requested, by I Am, via Flickr

Authors
Leslie Groves – independent consultant, Shoreham-by-Sea, United Kingdom.
Irene Guijt – Learning by Design, Australia.

Comments

David Week

I tend to support any argument that helps guarantee the rights of people to have a say in (if not over) processes that affect their future – including evaluation processes. I think the specific arguments in this article could go further than they do. They tend towards the instrumental: participation can make evaluation "better", while glossing over the question of "better in whose eyes?"

Because of the ethics of the client-professional relationship, and the persuasive force of economic influence, "better" is almost always defined as "better in the eyes of the payer". In such a frame, it's not surprising that participation is often sidelined or diminished, and that evaluators are forced, as a practical necessity, to sell participation to the payer as "better by your own lights". The arguments are useful within the limits of that frame.

Participation is often framed as the question of "they" being invited to participate in "our" evaluation or project. The inverse is also interesting: to ask by what right, from what invitation, under what aegis were we invited to participate in their lives? Often, the answer is "by the invitation of those who have power over them"; and once we have this answer, the reason for the difficulties in getting them invited in becomes apparent. It is very rare for those in power to invite those over whom they have power to share that power – except in limited forms, and on the condition that it be in the interests of those powers.

I was recently reading an essay which opened with a quote from the Maori researcher Linda Tuhiwai Smith (1999, p. 173). The quote seems to get closer to the core question, which is not about who gets to participate in what, but who has power and final say in the lives of the "beneficiaries" – the beneficiaries themselves, or external instrumentalities.

"In Maori communities today, there is a deep distrust and suspicion of research. This suspicion is not just of nonindigenous researchers, but of the whole philosophy of research and the different sets of beliefs which underlie the research process. Even in very recent studies this hostility or negative attitude to research in general has been noted. Research methodology is based on the skill of matching the problem with an ‘appropriate’ set of investigative strategies. It is concerned with ensuring that information is accessed in such a way as to guarantee validity and reliability. This requires having a theoretical understanding, either explicitly or implicitly, of the world, the problem, and the method. When studying how to go about doing research, it is very easy to overlook the realm of common sense, the basic beliefs that not only help people identify research problems that are relevant and worthy, but also accompany them throughout the research process.

Researchers must go further than simply recognizing personal beliefs and assumptions and the effect they have when interacting with people. In a cross-cultural context, the questions that need to be asked are ones such as:

Who defined the research problem?

For whom is this study worthy and relevant? Who says so?

What knowledge will the community gain from this study?

What knowledge will the researcher gain from this study?

What are some likely positive outcomes from this study?

What are some possible negative outcomes?

How can the negative outcomes be eliminated?

To whom is the researcher accountable?

What processes are in place to support the research, the researched, and the researcher?

Leslie Groves

David – what a great first comment to receive. The questions that you raise are just so essential to explore, from the earliest moments of conception and design through to our communication and dissemination strategies. Yet they are so very rarely asked. We will explore some of these in the checklists and other tools that we will highlight in Blog 3, along with some new ones that you have brought in.

Do you have experience of evaluations that addressed these questions? For example, where research questions were not pre-defined or were defined by those expected to take part in the research? Where sharing of knowledge generated by the evaluation with the "community" led to enhanced outcomes for them (other than continued funding of the project, for example)? If so, it would be really excellent to get some examples of what happened when these questions were asked and answered. 

 

Michael Longhurst

As Leslie and Irene write, "terminology matters". Nowhere is this more so than with the term "beneficiary", a remnant of post-colonial discourse. As David Week notes in his comment on this topic, the term can refer to "external instrumentalities", depending on the perspective of the user. From the perspective of a host country community, the beneficiaries of assistance are the donor agencies, their managing contractors, NGOs, consultants, logistics companies, finance managers, and maybe some in their government ministry who will be seconded to the assistance package. To a host country community, the term "beneficiaries" is bilas (Tok Pisin for decoration) for a range of more overtly subservient terms such as target group or client.

My suggestion is that an equitable term for the aid industry to use, instead of the paradoxical term "beneficiary", is "host country (education, health, rural, etc.) community".

Leslie Groves

Many thanks Michael for this. As I did the research for the DFID working paper on "beneficiary feedback", it was interesting to note how much energy we all spent on de-constructing the term "beneficiary" as opposed to the term "feedback". It is a term that brings up very strong feelings. I myself keep going round in circles.

My sense is that we haven't yet found a commonly accepted alternative, though many have tried. The problems with "community" are well known (Irene co-edited a book on this in 1998, The Myth of Community). "Host" as a term maybe tackles one form of power (donor-beneficiary) but hides other forms of power, i.e. who is inviting whom in, consciously or not, to do exactly what and for whom?

I fully agree with you that the industry as a whole is a primary beneficiary. However, I do think that most of us intend to bring some benefit to those with whom, or in whose name, we work. So maybe that takes us to the point where we are all intended beneficiaries? And that we still need to do our stakeholder analysis to find out who in the "community" actually benefits and how, and who doesn't and why?

Ben Carpenter

Hi, this is a great selection of blogs. It certainly does resonate with my experiences. At Social Value UK (formerly the SROI Network) our number one principle is "Involve Stakeholders". 

Using the term 'stakeholder' instead of 'beneficiaries' I think tackles some of the issues that Michael Longhurst raises. Funders, staff, beneficiaries - anyone who is affected by your project's activities is a 'stakeholder'.

Involving stakeholders is about shifting power. It also forces you to consider the purpose of the evaluation. What is evaluation if it does not make you more accountable to the people whom the project is designed for?

There is a great article about this on the SSIR that claims "Any approach to measuring social impact that doesn’t include a transfer of power to stakeholders is just marketing."

http://www.ssireview.org/blog/entry/people_power_and_accountability

At the risk of shameless promotion here is a link to a guide we produced on Stakeholder involvement. I think this tackles a lot of the issues raised in this series of blogs.

file:///C:/Users/Ben%20Carpenter/Downloads/Supplementary%20Guidance%20on%20Stakeholder%20Involvement%20(PDF).pdf

 

David Week

Hi Leslie. Thanks for the kind remarks. I will answer your question shortly. In the meantime I just read this amazing statement, and thought it worth sharing:

According to Blackburn (2000), ‘until the buzz-word participatory stepped into the spotlight, it was common to describe any bottom-up or grassroots approach as Freirian.’

http://eprints.lse.ac.uk/29193/1/IWP11Barroso.pdf

Irene Guijt

Thank you, everyone, for these thought-provoking contributions. Yes, it is about power: who decides which voices, which types of evidence, which approaches, whose timing, and so forth are prioritised and shape the evaluation process? What space do evaluation commissioners and evaluators allow for sharing decisions in what are overwhelmingly externally-initiated processes? Creating space requires being clear about the importance of that space. Our next blog will include more reflections on that aspect.
