Key Evaluation Questions (KEQs) are the high-level questions that an evaluation is designed to answer - not specific questions that are asked in an interview or a questionnaire. Having an agreed set of Key Evaluation Questions (KEQs) makes it easier to decide what data to collect, how to analyze it, and how to report it.
KEQs usually need to be developed and agreed on at the beginning of evaluation planning - however sometimes KEQs are already prescribed by an evaluation system or a previously developed evaluation framework.
Try not to have too many Key Evaluation Questions - a maximum of 5-7 main questions is usually sufficient. It can also be useful to have some more specific questions under the KEQs.
Key Evaluation Questions should be developed by considering the type of evaluation being done, its intended users, its intended uses (purposes), and the evaluative criteria being used. In particular, it can be helpful to imagine scenarios where the answers to the KEQs would be used - to check that the KEQs are likely to be relevant and useful, and that they cover the range of issues the evaluation is intended to address. (This process can also help to consider the types of data that might be feasible and credible to use to answer the KEQs.)
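As a rough illustration of this kind of review, the sketch below (in Python) records each KEQ together with optional sub-questions and the intended uses its answer should serve, so coverage and usefulness can be checked before data collection is planned. All class names, questions and uses are hypothetical placeholders, not part of the BetterEvaluation guidance.

```python
# Hypothetical sketch: recording KEQs, more specific sub-questions and the
# intended uses of their answers, so the set can be reviewed for coverage.
from dataclasses import dataclass, field
from typing import List

@dataclass
class KEQ:
    question: str                                            # high-level evaluation question
    sub_questions: List[str] = field(default_factory=list)   # optional, more specific questions
    intended_uses: List[str] = field(default_factory=list)   # decisions the answer should inform

keqs = [
    KEQ(
        question="How well is the program being implemented?",
        sub_questions=["Is delivery reaching the intended participants?"],
        intended_uses=["Adjust delivery mid-cycle"],
    ),
    KEQ(
        question="To what extent is the program achieving its intended outcomes?",
        intended_uses=["Inform the decision on continued funding"],
    ),
]

# Checks echoing the guidance above: keep to a handful of main questions,
# and make sure each KEQ has at least one imagined use for its answer.
assert len(keqs) <= 7, "Consider consolidating: more than 7 main KEQs"
for keq in keqs:
    assert keq.intended_uses, f"No intended use identified for: {keq.question}"
```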
The following information has been taken from the New South Wales Government, Department of Premier and Cabinet Evaluation Toolkit, which BetterEvaluation helped to develop.
Here are some typical key evaluation questions for the 3 main types of evaluation:
Key evaluation questions for the main types of evaluation
Type | Typical key evaluation questions
---|---
Process evaluation | How is the program being implemented?
Outcome evaluation (or impact evaluation) | How well did the program work?
Economic evaluation (cost-effectiveness analysis and cost-benefit analysis) | What has been the ratio of costs to benefits?
Appropriateness, effectiveness and efficiency
Key evaluation questions are often grouped under three broad categories: whether the program is appropriate, effective and efficient.
Organising key evaluation questions under these categories allows an assessment of the degree to which a particular program, in particular circumstances, is appropriate, effective and efficient. Suitable questions under these categories will vary with the type of evaluation (process, outcome or economic).
 | Typical key evaluation questions
---|---
Appropriateness | To what extent does the program address an identified need? How well does the program align with government and agency priorities? Does the program represent a legitimate role for government?
Effectiveness | To what extent is the program achieving the intended outcomes, in the short, medium and long term? To what extent is the program producing worthwhile results (outputs, outcomes) and/or meeting each of its objectives?
Efficiency | Do the outcomes of the program represent value for money? To what extent is the relationship between inputs and outputs timely, cost-effective and to expected standards?
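To show how these categories might be used in practice, here is a minimal sketch (in Python, using the questions from the table above) of candidate KEQs organised by category for a single program. The structure and the review check are illustrative assumptions, not a prescribed template.

```python
# Hypothetical sketch: candidate KEQs organised under the three broad
# categories, using example questions drawn from the table above.
keqs_by_category = {
    "Appropriateness": [
        "To what extent does the program address an identified need?",
        "How well does the program align with government and agency priorities?",
    ],
    "Effectiveness": [
        "To what extent is the program achieving the intended outcomes, in the short, medium and long term?",
    ],
    "Efficiency": [
        "Do the outcomes of the program represent value for money?",
    ],
}

# A quick review that no category has been left without questions.
for category, questions in keqs_by_category.items():
    if not questions:
        print(f"Warning: no KEQs drafted under '{category}'")
```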
Example
The Evaluation of the Stronger Families and Communities Strategy used clear Key Evaluation Questions to ensure a coherent evaluation despite the scale and diversity of what was being evaluated – an evaluation over 3 years, covering more than 600 different projects funded through 5 different funding initiatives, and producing 7 issues papers and 11 case study reports (including studies of particular funding initiatives) as well as ongoing progress reports and a final report.
The Key Evaluation Questions were developed through an extensive consultative process to develop the evaluation framework, which was done before advertising the contract to conduct the actual evaluation. The eight questions were:
1. How is the Strategy contributing to family and community strength in the short-term, medium-term, and longer-term?
2. To what extent has the Strategy produced unintended outcomes (positive and negative)?
3. What were the costs and benefits of the Strategy relative to similar national and international interventions? (Given data limitations, this was revised to ask the question in ‘broad, qualitative terms’.)
4. What were the particular features of the Strategy that made a difference?
5. What is helping or hindering the initiatives to achieve their objectives? What explains why some initiatives work? In particular, does the interaction between different initiatives contribute to achieving better outcomes?
6. How does the Strategy contribute to the achievement of outcomes in conjunction with other initiatives, programs or services in the area?
7. What else is helping or hindering the Strategy to achieve its objectives and outcomes? What works best for whom, why and when?
8. How can the Strategy achieve better outcomes?
CIRCLE (2008) Stronger Families and Communities Strategy 2000-2004: Final Report. Melbourne: RMIT University.
The KEQs were used to structure progress reports and the final report, providing a clear framework for bringing together diverse evidence and an emerging narrative about the findings.
Resources
Guides
- A Practical Guide for Engaging Stakeholders in Developing Evaluation Questions: provides a detailed, step-by-step guide to engaging stakeholders in the development of evaluation questions. (Robert Wood Johnson Foundation)
- Looking Back, Moving Forward: SIDA Evaluation Manual: provides a step-by-step guide to formulating evaluation questions. (pp. 70-72, SIDA)
- Evaluation questions: provides a comprehensive guide to the use of evaluation questions. (EuropeAid)
Tools
- Stakeholders’ Interest in Potential Evaluation Questions: provides a template for developing evaluation questions which engage stakeholders' interest in the process. (National Science Foundation).
- Prioritize and Eliminate Questions: provides a template which allows the organisation and selection of possible evaluation questions. (National Science Foundation).
KEQ Checklists
- CDC: Checklist to help focus your evaluation: This checklist, created by the Centers for Disease Control and Prevention (CDC), helps you to assess potential evaluation questions in terms of their relevance, feasibility, fit with the values, nature and theory of change of the program, and the level of stakeholder engagement.
- Evaluation Checklist for Program Evaluation: This checklist by Lori Wingate and Daniela Schroeter distills and explains criteria for effective evaluation questions. It can be used to aid in developing effective and appropriate evaluation questions and in assessing the quality of existing questions. It identifies characteristics of good evaluation questions, based on the relevant literature and the authors' own experience with evaluation design, implementation, and use.
Examples
- Evaluation at country level, regional level, sector or thematic global evaluation: provides a range of example evaluation questions (EuropeAid)
Cite this page
BetterEvaluation. (2016) Specify the Key Evaluation Questions (KEQs). Retrieved from: http://betterevaluation.org/en/plan/engage_frame/decide_evaluation_quest...
Comments
This page does an excellent job of providing a large number of practical questions that can be used across different types of evaluations. However, it doesn't distinguish between research questions and evaluation questions. This differentiation is crucial to ensure that what we are calling an evaluation is not merely a piece of applied research. For something to count as an evaluation, it must investigate questions related to merit, worth and significance (Scriven, 2015) or, in simpler terms, the "goodness of a program" (Gullickson, 2018), and not simply describe what is happening in a program (or other form of evaluand). This issue is explored in depth in Nunns, Peace, and Witten's 2015 article, Evaluative reasoning in public-sector evaluation in Aotearoa New Zealand.
As Gullickson (2018) describes "often what is called an evaluation question is simply a research question, which just requires plain descriptive or causal answers...In the hierarchy of evaluation, research questions serve and provide information to answer evaluation questions." Research questions, which are merely descriptive, should be embedded inside evaluation questions as follows:
- EQ1
  - RQ1
  - RQ2
- EQ2
  - EQ2a
    - RQ3
  - EQ2b
    - RQ4
    - RQ5
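To make that nesting concrete, here is a minimal sketch (in Python) of the hierarchy above as a simple data structure, with a helper that collects the research questions serving each evaluation question. The structure and helper are illustrative assumptions, and the grouping of RQ4 and RQ5 under EQ2b follows one reading of the diagram.

```python
# Hypothetical sketch of the hierarchy above: research questions (RQs)
# are nested inside the evaluation questions (EQs) they serve.
question_hierarchy = {
    "EQ1": ["RQ1", "RQ2"],
    "EQ2": {
        "EQ2a": ["RQ3"],
        "EQ2b": ["RQ4", "RQ5"],
    },
}

def research_questions_under(node):
    """Collect every research question nested beneath an evaluation question."""
    if isinstance(node, list):
        return list(node)
    collected = []
    for child in node.values():
        collected.extend(research_questions_under(child))
    return collected

# Each top-level EQ is ultimately answered by drawing on its RQs.
for eq, children in question_hierarchy.items():
    print(eq, "is served by", research_questions_under(children))
```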
As Gullickson further describes, "evaluation questions get at the heart of what makes the evaluand good, valuable or worthwhile. They go directly to questions of merit, worth, and/or significance."
Common characteristics shared by research and evaluation questions:
Characteristics that make a question an evaluation question:
When you are preparing for an evaluation (and preparing your clients), checking your questions against these last two characteristics can be a good litmus test of whether they want an evaluation - which comes with a judgement - or just research to tell them what's going on with their programs. Either can serve their needs, but one isn't evaluation (by our definition), and you need to ascertain if you and the client are on the same page about what they need (Gullickson, 2018).
Some examples of how research questions can be transformed into evaluation questions are provided below (Gullickson, 2018).
Research questions
Research questions transformed into evaluation questions
References
Gullickson, A. (2018). Practice of Evaluation [course materials]. Melbourne, Victoria: University of Melbourne, EDUC90847.
Nunns, Peace, and Witten (2015). Evaluative reasoning in public-sector evaluation in Aotearoa New Zealand: How are we doing? Evaluation Matters—He Take Tō Te Aromatawai, 1. New Zealand Council for Educational Research. http://www.nzcer.org.nz/system/files/journals/evaluationmaters/downloads...
Scriven, M. (2015) Key Evaluation Checklist. Retrieved from https://wmich.edu/evaluation/checklists