Choices about voices

[Image: four faces shouting into megaphones from the corners of the image, saying "Making Choices"]

In this third blog in the participation in evaluation series, Irene Guijt and Leslie Groves share frameworks to approach and make decisions about the level of stakeholder involvement during different evaluation stages.

Thinking about participation of stakeholders in any evaluation process clearly involves many choices. Our first and second blogs illustrated the many directions open to evaluation commissioners and professionals who are seeking to make evaluation (more) participatory. ‘Stakeholder participation’ could involve anyone with an interest (even a marginal one) in the evaluation: from co-designers of the evaluation to those who are asked to share their experiences of an intervention. Involving all stakeholders equally intensely is almost never feasible or useful. So when evaluation proposals include statements such as ‘We will involve all stakeholders’, it is safe to conclude that insufficient thought has been given to the question of who matters and in which way. Much more precision is possible and needed. Even if you choose not to have a participatory evaluation (see blog 1 for our definition), you still have many options for more participation in an evaluation - which requires careful analysis of who can and should be invited to contribute and how they might contribute.

So how can you, as an evaluator or commissioner, decide how to bring deeper levels of participation to your evaluation process? This blog offers three different frameworks to help you come to a practical and feasible approach for your specific situation. Each framework is based on a specific entry point that helps to think through the evaluation process systematically. The first is inspired by Irene’s work on ‘listening deeply and elevating voice’ with The MasterCard Foundation, the second is based on Leslie’s work for DFID on feedback in evaluation, and the third draws on Irene’s work for UNICEF on participation in impact evaluation. All three are written from the perspective of commissioners or professional evaluators, as they were commissioned by funding agencies. But they remain relevant for participatory evaluation, where intended beneficiaries drive the entire process. The basic question remains who to bring more fully into the process.

Entry point 1. Considering four aspects of participation

Participation in evaluation is, at a minimum, about creating space for people’s voice in an evaluation process. It starts with creating space for people to speak their minds. But it extends to include how those with decision-making power in the evaluation process listen and then take relevant action in culturally appropriate ways (see Framework 1 for more details on these four aspects). Thinking about these four aspects, alongside the other two frameworks, might help you identify what you need to put in place to support sincere participation and engagement.

Framework 1. Unpacking four aspects of voice (inspired by Oakden and McKegg 2015) 

Sharing. The people whose voices matter, particularly those often not heard, share their diverse views. They are supported to speak uncensored, in safe spaces and at moments when their influence over the evaluation is strong. Methods are selected with this intention in mind.

Listening. The people who have power over implementation and strategies (local or external) are listening often enough and openly, with feedback about what they have heard, in ways that make stakeholders feel they have been heard.

Hands. Those who are listening are open to taking action, even if what they hear goes against what they hoped for, and are transparent with those who provided input about what those actions are.

Cultural competence underpins the creation and use of spaces and mechanisms to enable sharing, listening and acting, and thereby supports the integrity of the cultures involved while not shrinking from challenging injustices.

Entry point 2. Selecting the most appropriate type of feedback flow for each stage in the evaluation cycle

Another, complementary way to look at participation is suggested in Leslie’s report on beneficiary feedback in evaluation for DFID. The report proposes a simple framework to help evaluation commissioners and practitioners map out the different types of beneficiary feedback that they might choose at each stage of an evaluation. The aim is to support decision-making about which type of beneficiary feedback is most appropriate in the given evaluation context. Framework 2 below highlights examples of each type of feedback at different evaluation stages. A blank copy of the framework and other tools, including checklists for commissioners and evaluators, can be downloaded from the report workspace.

Framework 2. Feedback in the evaluation cycle with examples of less common practice

| Evaluation stage | One-way feedback to beneficiaries | One-way feedback from beneficiaries | Two-way feedback - interactive conversation between intended beneficiaries and evaluators | Two-way feedback through participatory evaluation |
| --- | --- | --- | --- | --- |
| Evaluation Design | | | Draft evaluation questions were shared with groups of young people prior to field visits, to ensure that a) consent to participate was informed and b) young people had the opportunity to comment on or modify the proposed questions. (Plan UK Global Evaluation of Youth Governance Programme) | |
| Information Gathering | | | | All the enumerators for both the quantitative and qualitative aspects of the evaluation were intended beneficiaries. (World Vision, various) |
| Validation and Analysis | | | Data analysis and validation were conducted with community mayors and community representatives. A participatory ranking exercise using sticky dots and flipchart paper assessed participants’ perceptions of the process and the accuracy of the data. (World Vision, various) | |
| Dissemination and Communication | Evaluation findings shared in a youth-friendly format in the local language (French and Spanish) with all participants. (Plan UK) | | | |


Entry point 3: Deciding based on evaluation tasks

The BetterEvaluation Rainbow Framework provides a detailed overview of the seven key tasks in an evaluation process: managing the evaluation, defining what is being evaluated, framing the evaluation’s focus, describing activities and results by collecting data, understanding the causes of changes, synthesising findings, and reporting and supporting use of the findings. For each task, the question ‘who is best involved and how?’ can be asked. Framework 3 suggests some questions on participation that can bring clarity for choices about voice (see Guijt 2014 for more details on each task).

Framework 3. Key evaluation tasks and examples of questions about participation

1.  Manage an evaluation – clarity about who will be involved in what decisions

  • Who will have decision-making power about what kind of decisions in the evaluation and who will be consulted? 
  • Whose values will determine what a good quality evaluation looks like? 
  • Whose and which capacities may need to be strengthened for optimal participation in evaluation? 

2.  Define - clarity about what is to be evaluated and how it is supposed to work

  • Who will be involved in revising or creating a theory of change on which the evaluation will reflect? 
  • Who will be involved in identifying possible unintended effects (beyond the theory of change)?

3.  Frame – clarity about purpose, key evaluation questions and criteria/standards

  • Who will decide about the evaluation purpose(s); who will be consulted about it? 
  • Who will set or be consulted about the evaluation questions?
  • Whose criteria and standards matter in judging performance? 

4.  Describe – clarity about activities, results and context

  • Who will decide whose voice matters in terms of describing results? 
  • Who will collect or retrieve data? 
  • Who will be involved in organising and storing data?

5.  Understand causes – clarity about causality 

  • Who will be involved in using data to compare to the theory of change?
  • Who will decide what to do with contradictory information? Whose information will matter most? Why?
  • Who will help identify possible (alternative) explanations for outcomes?

6.  Synthesise - clarity about overall worth of intervention(s)

  • Who will be involved in synthesising data – and who will validate data? 
  • Who will decide about recommendations or lessons learned – and who will be consulted? 

7.  Report and support use – clear findings to share

  • Who will share the findings with whom, and in ways that are appropriate to the use and the audience?
  • Which users need support to make use of the findings?

Keeping it simple with five questions

These lists of questions may seem quite extensive - we are not suggesting you use them all. They are simply different ways to help make choices about voices. Ultimately, it is about being systematic about what is most appropriate at different stages of the evaluation. Ask yourself – as commissioner or evaluator:

  1. Why does participation in evaluation matter for this specific situation?
  2. Whose participation matters?
  3. When does participation in this evaluation process matter?
  4. What form will participation of different stakeholders take?
  5. Who needs what conditions to make the envisaged level of participation possible - time, resources, capacity, facilitation?

Hearing about your experiences

We are keen to hear about other ways that have helped you make choices about voice, and whether the frameworks presented here might work for you.

What other frameworks or questions have you found useful to make choices about voice in evaluation?

If you are in a position to try out one of these frameworks – or the five questions (or have already done so), did that help in any way? If not, what was missing? If it did, what became clearer?

Q&A / webinar

In response to popular demand, Irene Guijt and Leslie Groves held a Q&A on the reflections presented in their blog series on participation in evaluation. View the recording below.
