What are some processes that can be used to get agreement on the Key Evaluation Questions?

The material from BetterEvaluation comes from a combination of curating existing material and co-creating new material. This blog is part of an ongoing series about material that we have co-created with BetterEvaluation users.

It shares material that was jointly developed through a challenge process at the 2017 Australasian Evaluation Society conference in Canberra in September. You can view the previous blog in this series here: What are some options and processes to help stakeholders articulate how they think a program works? (AES17 co-creation challenge #1)

Thank you to everyone who contributed to this co-creation challenge!

Key Evaluation Questions (KEQs) are the high-level questions that an evaluation is designed to answer - not the specific questions asked in an interview or questionnaire. Having an agreed set of KEQs makes it easier to decide what data to collect, how to analyse it, and how to report it. Our Specify the Key Evaluation Questions Task in the Rainbow Framework discusses some considerations to keep in mind when developing your KEQs:

Key Evaluation Questions should be developed by considering the type of evaluation being done, its intended users, its intended uses (purposes), and the evaluative criteria being used. In particular, it can be helpful to imagine scenarios in which the answers to the KEQs would be used - to check that the KEQs are likely to be relevant and useful and that they cover the range of issues the evaluation is intended to address.

However, while it's great to say you should develop your KEQs in consultation with the evaluation's intended users, there's a gap in terms of advice about how to actually go about this. That's why we chose this as our second co-creation challenge. 

From the responses we received, the key recommendations were to listen (and to create appropriate processes and safe spaces for listening), to keep returning to the key decisions that need to be made, and to prioritise. Here are the recommended processes:

  1. Build ongoing communication and regular meetings into the process to gain stakeholder and reference group input. Developing KEQs can be a tricky process, involving trust and negotiation of different opinions, identities, and perspectives. Regular meetings can help to build this trust and help both you and the other stakeholders to create a better understanding of where different people's priorities are coming from. You can find some resources on meeting processes on our Formal Meeting Processes method page in the Rainbow Framework.

  2. Involve people with reporting oversight, such as grant managers/recipients, managers, and staff. These people will have specific data and reporting needs, so it's a good idea to find out what these are and make sure the KEQs are relevant to them. Stakeholder mapping and analysis can help with this.

  3. Create an internal and informal online space where participants or stakeholders can add or develop ideas and comments. This boils down to making it easy for people to participate. We're all busy, and some of us are shy - an online space allows people to contribute when it suits them, without the pressure of having to give their opinion in front of an audience. There are different ways to do this - you might create a private forum or email group (like Google Groups), upload a document or spreadsheet to Google Drive or Dropbox that people can comment on, or investigate online team-based collaboration software (e.g. Trello or Slack) that allows you to interact with your relevant stakeholders online.

  4. Use narrowing techniques to group and then prioritise responses. Card sorting is one way of facilitating the grouping of ideas, or you might create a group mind map (see 'mudmaps' in our last blog in this series) to see how the ideas in the KEQs connect. To prioritise, you could try using dotmocracy or feedback frames to get people to indicate the questions that are most important to them (a worked example follows this list). There may also be a set or suggested list of priorities for the organisation; for example, this CDC resource on developing KEQs lists the following criteria for prioritising evaluation questions:

  • Important to program staff and stakeholders
  • Address important program needs
  • Reflect five-year program goals, strategies, and objectives of your program
  • Can be answered with available resources, including funds and personnel expertise
  • Can be answered within the available timeframe
  • Provide information to make program improvements
  • Will be supported by your school health program administration [or the key decision makers in your area]

There is also a worksheet listed in the resources below that provides a template that can help to organise, prioritise and select the evaluation questions.
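
To make the narrowing step concrete, here's a hypothetical dot-voting tally (the candidate questions, dot counts, and decisions below are invented purely for illustration). Suppose each stakeholder is given three dots to place on the questions that matter most to them:

  • To what extent did the program reach its intended participants? - 11 dots - keep as a KEQ
  • How well was the program implemented across sites? - 8 dots - keep as a KEQ
  • What unintended outcomes (positive or negative) occurred? - 7 dots - keep as a KEQ
  • How do participant experiences vary by location? - 3 dots - fold into the implementation question
  • Was the branding of program materials effective? - 1 dot - out of scope for this evaluation

The highest-scoring questions become the agreed KEQs, and lower-scoring ones are merged, parked for a future evaluation, or dropped.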

  5. Focus on what decisions are required, and why, to ensure that everyone is aware of the relevance of each question. Going back to your primary intended users is a good way to help with this.

  6. Return to the program logic and other existing project documents and briefs as a way to bring stakeholders back to the project or program's aims (see the Develop Programme Theory method page).

  7. Acknowledge what you already know - you might already have a 'good enough' answer to some of the identified questions. You'll likely always have more questions, but to keep your evaluation in scope it can be a good idea to pick the ones that will have the most impact in terms of improving your knowledge.

  8. Keep it real - you may have to remind stakeholders to be realistic about what can be done in a single evaluation. What questions can you actually investigate within the scope of the evaluation? Creating an evaluation matrix (which shows the data sources for each Key Evaluation Question) and working through it with stakeholders can be a useful way to frame this conversation - a simplified example is sketched below.
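
To illustrate, here is a minimal, hypothetical evaluation matrix (the questions and data sources are invented for the sake of the example). Each row is a KEQ, and the ticks show which data sources would be drawn on to answer it:

| Key Evaluation Question | Program records | Staff interviews | Participant survey |
|---|---|---|---|
| To what extent were the intended outcomes achieved? | ✔ | | ✔ |
| How well was the program implemented? | ✔ | ✔ | |
| What factors helped or hindered implementation? | | ✔ | ✔ |

Walking through a matrix like this quickly exposes questions that have no feasible data source - a strong signal that they need to be reworked or dropped.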

Additional perspectives from practice

If your journey to agreement about KEQs still seems a bit overwhelming, take comfort in the knowledge that you're not alone.

You can find a good rant by Rick Davies here about the use of evaluation questions (or, to be specific, about being handed multi-page lists of sundry open-ended questions by a commissioner) - and the discussion in the comments on that post includes some good examples of how people have dealt with situations like this in their work.

Another great example I like is from Michael Quinn Patton's book Utilization-Focused Evaluation (2008, pp. 49-51). You can read the full extract on Google Books here - it's a lovely anecdote about Michael trying to engage a room full of hostile stakeholders in order to identify a set of evaluation questions and concerns. As both sides grow increasingly frustrated and are on the verge of calling off the evaluation, Michael asks everyone in the group to complete the sentence "What I want to know from the evaluation is ________" ten times - and this, and the subsequent process of narrowing down everyone's questions, completely turns the evaluation around. It's definitely worth a read. For more information on UFE, you can also check out page 52 of this (Exhibit 2.3), which lists five Criteria for Utilization-Focused Evaluation Questions, and read our Utilization-Focused Evaluation approach page.

Do you have any other ideas on how to get agreement on KEQs? Or perhaps some examples from your own practice about what's worked well (or not worked)? We'd love to hear them in the comments below.

Resources

We have a number of resources on the site that can help with focusing key evaluation questions and guiding discussions around these.

Evaluation Questions Checklist for Program Evaluation

Created by Lori Wingate and Daniela Schroeter, this checklist aims to aid in developing effective and appropriate evaluation questions and in assessing the quality of existing questions. It identifies characteristics of good evaluation questions, based on the relevant literature and experience with evaluation design, implementation, and use.

CDC: Checklist to help focus your evaluation

This checklist, created by the Centers for Disease Control and Prevention (CDC), helps you to assess potential evaluation questions in terms of their relevance, feasibility, fit with the values, nature and theory of change of the program, and the level of stakeholder engagement. It's been recommended to BetterEvaluation by multiple people over the years.

Evaluation questions

This webpage from EuropeAid provides a detailed guide on the development of evaluation questions, including using program logic to directly and indirectly guide question development.

Developing Process Evaluation Questions

This guide by the Centers for Disease Control and Prevention defines evaluation questions and outlines the process needed to develop them.  

Prioritise and Eliminate Questions

This worksheet from Chapter 5 of the National Science Foundation's User-Friendly Handbook for Mixed Method Evaluations provides a template for organising and selecting possible evaluation questions. It outlines criteria that can be used when deciding on the questions, such as: Who wants to know? Will the information be new or confirmatory? How important is the information to various stakeholders? Is it feasible?