Participatory evaluation

Contributing author: Cristina Sette

Participatory evaluation is an approach that involves the stakeholders of a programme or policy in the evaluation process.

This involvement can occur at any stage of the evaluation process, from design through data collection and analysis to the reporting of the study. A participatory approach can be taken with any impact evaluation design, and with quantitative or qualitative data. However, the type and level of stakeholder involvement will necessarily vary between different types of evaluation, for example between a local-level impact evaluation and an evaluation of policy changes (Guijt 2014, p.1). To maximise the effectiveness of the approach, it is important to consider the purpose of involving stakeholders, which stakeholders should be involved, and how.

Campilan (2000) indicates that participatory evaluation is distinguished from the conventional approach in five key ways:

  • Why the evaluation is being done
  • How the evaluation is done
  • Who is doing the evaluation
  • What is being evaluated
  • For whom the evaluation is being done

It is often practised in various ways, such as self-assessment, stakeholder evaluation, internal evaluation and joint evaluation. It can also draw on methods such as individual story-telling, participatory social mapping, causal-linkage and trend-and-change diagramming, scoring, and brainstorming on programme strengths and weaknesses.

Advantages of doing participatory evaluation

  • Identify locally relevant evaluation questions
  • Improve accuracy and relevance of reports
  • Establish and explain causality
  • Improve program performance
  • Empower participants
  • Build capacity
  • Develop leaders and build teams
  • Sustain organizational learning and growth

Challenges in implementing and using participatory evaluation

  • Time and commitment
  • Resources
  • Conflicts between approaches
  • Unclear purpose of participation, or a purpose that is not aligned with evaluation design
  • Lack of facilitation skills
  • Only focusing on participation in one aspect of the evaluation process, e.g. data collection
  • Lack of cultural and contextual understanding, and the implications of these for the evaluation design

As Irene Guijt notes:

"The benefits of participation in impact evaluation are neither automatic nor guaranteed. Commissioning such approaches means committing to the implications for timing, resources and focus. Facilitation skills are essential to ensuring a good quality process, which in turn may require additional resources for building capacity." (Gujit 2014, 18)

Example

Supporting indigenous governance in Colombia

In Colombia, ACIN, an association of indigenous people covering 13 communities, is involved in monitoring and evaluating its own multi-sectoral regional development plan. The association is examining links between productivity and environmental and cultural factors, tracking changes over time and comparing plans with results in a systematic way. This has helped communities recognise their strengths and improve their management capabilities, which, in turn, is leading to changes in power relationships. Links are being made between communities, providing the concerted voice needed in negotiations with national and provincial government, and the private sector. (Guijt and Gaventa, 1998)

Advice

Advice for CHOOSING this approach

There are a number of reasons to use this approach:

  • Involving stakeholders in the process of an evaluation can lead to "better data, better understanding of the data, more appropriate recommendations, [and] better uptake of findings" (Guijt 2014, p.2)
  • It is ethical to include the people who will be affected by a programme or policy in the process that informs decisions about it.
  • The first step in a participatory approach is to clarify what value this approach will add, both to the evaluation and to the stakeholders who would be involved. Irene Guijt suggests three questions to ask when using this approach (2014, p.3):
    1. What purpose will stakeholder participation serve in this impact evaluation?
    2. Whose participation matters, when and why?
    3. When is participation feasible?

Advice for USING this approach

  • It is important to pilot any method of participatory evaluation to ensure safe and open engagement with participants, and that relevant indicators are included.
  • There are a number of ways to use participatory methods:
    • To collect qualitative and quantitative impact data.
    • To investigate causality, for example through focus group discussions or interviews.
    • To negotiate differences and to validate key findings.
    • To score people’s appreciation of an intervention’s impact, for example through matrix ranking or a spider diagram.
    • To assess impacts in relation to wider developments in the intervention area.
  • You can find more information on these in the UNICEF Methodological Brief on Participatory Approaches in impact evaluation.

Using the BetterEvaluation Framework to answer: Whose participation matters, when and why?

UNICEF's Methodological Brief on Participatory Approaches uses the BetterEvaluation Rainbow Framework to cluster a number of questions for someone seeking to use a participatory approach (Guijt 2014, pp.7-8):

Manage

Manage an evaluation (or a series of evaluations), including deciding who will conduct the evaluation and who will make decisions about it.

  • Who should be invited to participate in managing the impact evaluation? Who will be involved in deciding what is to be evaluated? 
  • Who will have the authority to make what kind of decisions?
  • Who will decide about the evaluators? Who will be involved in developing and/or approving the evaluation design/evaluation plan?
  • Who will undertake the impact evaluation?
  • Whose values will determine what a good quality impact evaluation looks like?
  • What capacities may need to be strengthened to undertake or make the best use of an impact evaluation?

Define

Develop a description (or access an existing version) of what is to be evaluated and how it is understood to work.

  • Who will be involved in revising or creating a theory of change on which the impact evaluation will reflect? 
  • Who will be involved in identifying possible unintended results (both positive and negative) that will be important? 

Frame

Set the parameters of the evaluation – its purposes, key evaluation questions and the criteria and standards to be used.

  • Who will decide the purpose of the impact evaluation? 
  • Who will set the evaluation questions? 
  • Whose criteria and standards matter in judging performance? 

Describe

Collect and retrieve data to answer descriptive questions about the activities of the project/programme/policy, the various results it has had and the context in which it has been implemented.

  • Who will decide whose voice matters in terms of describing, explaining and judging impacts? 
  • Who will help to identify the measures or indicators to be evaluated? 
  • Who will collect or retrieve data? 
  • Who will be involved in organizing and storing the data? 

Understand causes

Collect and analyse data to answer causal questions about what has produced the outcomes and impacts that have been observed.

  • Who will be involved in checking whether results are consistent with the theory that the intervention produced them? 
  • Who will decide what to do with contradictory information? Whose voice will matter most and why? 
  • Who will be consulted to identify possible alternative explanations for impacts?

Synthesise

Combine data to form an overall assessment of the merit or worth of the intervention, or to summarize evidence across several evaluations.

  • Who will be involved in synthesizing data? 
  • Who will be involved in identifying recommendations or lessons learned? 

Report & support use

Develop and present findings in ways that are useful for the intended users of the evaluation, and support them to make use of findings.

  • Who will share the findings? 
  • Who will be given access to the findings? Will this be done in audience-appropriate ways? 
  • Which users will be encouraged and adequately supported to make use of the findings?

Resources


Campilan, D. (2000). Participatory Evaluation of Participatory Research. Forum on Evaluation of International Cooperation Projects: Centering on Development of Human Resources in the Field of Agriculture. Nagoya, Japan, International Potato Center. https://web.archive.org/web/20140703095000/http://ir.nul.nagoya-u.ac.jp/jspui/bitstream/2237/8890/1/39-56.pdf (archived link)

Chambers, R. (2009). Making the Poor Count: Using Participatory Options for Impact Evaluation. In Chambers, R., Karlan, D., Ravallion, M. and Rogers, P. (Eds), Designing impact evaluations: different perspectives. New Delhi, India, International Initiative for Impact Evaluation. https://web.archive.org/web/20120514104119/http://www.3ieimpact.org/admin/pdfs_papers/50.pdf (archived link)

Guijt, I. and J. Gaventa (1998). Participatory Monitoring and Evaluation: Learning from Change. IDS Policy Briefing. Brighton, UK, University of Sussex. http://www.ids.ac.uk/files/dmfile/PB12.pdf

Guijt, I. (2014). Participatory Approaches, Methodological Briefs: Impact Evaluation 5, UNICEF Office of Research, Florence. Retrieved from: http://devinfolive.info/impact_evaluation/img/downloads/Participatory_Approaches_ENG.pdf

Zukoski, A. and M. Luluquisen (2002). "Participatory Evaluation: What is it? Why do it? What are the challenges?" Policy & Practice (5). http://depts.washington.edu/ccph/pdf_files/Evaluation.pdf

