Still Hesitating? Let's bust some myths around increasing stakeholder participation in evaluation
In the final blog of this 4-part series, Leslie Groves and Irene Guijt address some of the most common forms of resistance to increasing levels of participation in evaluation.
Still hesitating about whether to increase stakeholder participation in your evaluation? Here is some myth busting that might just help.
"Participation in evaluation requires time that I simply don’t have."
This commonly expressed form of resistance is based on a false premise: that participation is optional in evaluation practice. Yet any evaluation worthy of the name will involve some level of participation from those expected to benefit from a specific initiative. Randomised Controlled Trials require participants to answer survey questions, and they are renowned for being time-intensive processes. Conducting a survey or running focus group discussions takes time, yet no-one disputes the time needed to do so. Resistance, then, does not seem to be related to the time required to take information from people. It appears to be more about the time needed to obtain people's input into areas over which evaluators traditionally hold the power: evaluation design, analysis of findings and dissemination. So we suggest that the 'not enough time' excuse is strongly influenced by how comfortable we are with sharing our power, as discussed in Blog 2.
In practical terms, making evaluation more participatory does not always require significant additional time. Some ways to enhance stakeholder participation can actually save time. For example, innovative ways of informing people about an upcoming evaluation cost little time: perhaps half an hour to design and print a poster placed outside a community health facility explaining the evaluation's content and process. This quick step can increase both levels of participation and the quality of evidence gathering, by giving people a chance to prepare answers in advance, to consult with others, and to be in a position to provide genuinely informed consent to take part in the evaluation. This then saves significant time later in the process.
In contrast, where the intention is to go broader and be participatory in the design, validation and analysis phases, additional rounds of consultation will be required, and this does take time. Where evaluations have tight, unrealistic timeframes it may well be impossible to deepen or broaden levels of participation, never mind to conduct a participatory evaluation. It may not even be possible to conduct a good evaluation of any kind. This is where commissioners and evaluators need to be realistic and honest about what can and cannot be achieved within the timeframe allocated. A discussion may be a helpful way to decide whether additional time should be allocated to enable a deeper and/or broader process of participation in the evaluation. The three frameworks presented in Blog 3 can help guide this discussion.
"Increasing participation in evaluation is too expensive."
Obtaining rigorous evidence clearly comes at a cost, but whether obtaining this evidence in a participatory way is 'too expensive' is subjective. The answer appears to depend as much on our worldview and academic training as on a costed analysis of the different methods.
So what are the costs involved in deepening or broadening stakeholder participation in an evaluation? Depending on the depth of participation required, you will need to think about holding additional consultations during different evaluation phases, which involves additional costs. Or you may want local people to carry out the research themselves, which may or may not save on consultancy costs, but will require training them, supporting them and paying for their time. Costs can be reduced through creative thinking: for example, holding focus group discussions using participatory methods before or after already scheduled community meetings.
The other question to explore is what 'too expensive' is being compared with. Is it being compared with an SMS survey, for example? That option may demand less of stakeholders' time, but it still requires financial investment in design, piloting, data entry and data analysis. So maybe the costs are, in fact, comparable?
The bigger question, however, is one that goes far beyond the choice of specific methods, as discussed in Blog 1 and Blog 2. Our belief is that a participatory approach supports evaluation whatever the choice of methods, be they quantitative or qualitative.
Finally, when exploring the question of “too expensive”, it is important to assess costs alongside the potential benefits of deepening and broadening stakeholder participation in evaluation, such as those highlighted in Blog 1.
"Increasing participation threatens the independence of the evaluation."
Independence is considered a key quality standard of an evaluation. The OECD DAC Quality Standards for Development Evaluation note in paragraph 3.2 that evaluators should be independent "from the development intervention, including its policy, operations and management functions, as well as intended beneficiaries." This standard seeks to ensure that evaluations are not influenced by bias and partiality for or against the intervention or certain stakeholders. Some have interpreted it as meaning that deepening or broadening community participation in evaluation threatens independence and should therefore not be encouraged. This throws the baby out with the bathwater. Instead of avoiding participation, evaluators should be encouraged to document their positioning, how they will reduce bias, and how they will retain the expected degree of objectivity.
This interpretation is also at odds with other aspects of the standards. Paragraph 1.4, for example, refers to an inclusive process that involves intended beneficiaries from the early stages. Paragraph 1.6 refers to building the evaluation capacity of partners to support an environment of accountability and learning, and to stimulate demand for and use of findings. The standards also explicitly refer to engaging relevant stakeholders in design and in commenting on draft reports (2.5 and 3.15). So the participation of stakeholders is also a key quality standard of evaluation processes – just one that has, to date, been valued less than others.
"The commissioners just aren’t interested."
Evaluation frameworks are currently driven largely by a focus on measuring results. And who better to explore results than the different groups of intended beneficiaries? Yet the necessary investment in deepening and broadening the participation of intended beneficiaries throughout the evaluation process is not common practice, either for commissioners or for evaluators.
Lack of incentive is a clear issue in many international development agencies – their survival does not depend on any form of feedback from intended beneficiaries. Contrast this with the private sector, where client feedback is vital to organisational survival. The organisation is necessarily responsive, because its survival depends on client satisfaction and achieving appropriate results. In international development or humanitarian assistance, those intended to benefit from spending have few alternatives. People living in poverty or crisis aren’t usually given a choice about which donor or approach they want to go with.
Linked to the lack of incentives in the development and humanitarian sectors is the fact that many organisations do not include stakeholder participation in evaluation in their guidance and quality assurance mechanisms. For example, the skill sets necessary to facilitate effective and meaningful stakeholder participation are often not included in Terms of Reference or valued by evaluation consultancies. These include listening, language skills, cultural sensitivity, facilitation, consensus building and collaborative problem solving.
Encouragingly, incentives seem to be changing, with initiatives from US and UK NGO network bodies and regular scrutiny by the UK Independent Commission for Aid Impact of DFID's use of beneficiary feedback to inform programming. The World Bank has committed to gathering participant feedback in all of its projects that have clear participants. The US Senate passed the Consolidated and Further Continuing Appropriations Act 2015, which requires monitoring and evaluation functions to allocate funds to the regular collection of feedback from the beneficiaries of humanitarian programmes. These initiatives have generated considerable interest in feedback mechanisms, one aspect of participation in evaluation.
We, as members of the evaluation community, also have a responsibility to foster interest. Do we systematically encourage commissioners to look at the options for participation? If not, why not? Are we prepared to relinquish some of our power and to trust that people have important contributions to make to how we conduct our evaluations? In our experience, commissioners are open to hearing from evaluation experts who can argue the case for the benefits of increasing levels of participation in evaluation processes.
Hearing about your experiences
So what do you think? Do these forms of resistance resonate with your experiences? Do you encounter other forms of resistance? How do you counter these?
Have you ever costed out increasing levels of participation in evaluation and weighed the costs against the benefits? What were the results?
Q&A / webinar
Irene Guijt and Leslie Groves held a Q&A on the reflections presented in their blog series on participation in evaluation.