Develop agreed key evaluation questions

Evaluation, by definition, must answer truly evaluative questions: it must ask not only ‘What were the results?’ (a descriptive question) but also ‘How good were the results?’ (an evaluative question). Depending on the type of evaluation, causal questions also need to be addressed (to what extent were the results due to the intervention?).

An evaluation should be focused around answering a small number of high-level key evaluation questions (KEQs) about overall performance. Each KEQ should then be unpacked into more detailed questions about performance on specific dimensions of merit (related to evaluative criteria such as relevance, equity, effectiveness and sustainability). The KEQs also need to reflect the intended uses of the evaluation.

Good KEQs are:

  • limited in number: 7 ± 2 questions is a good rule of thumb. This allows coverage of different aspects of the intervention while keeping the list small enough that the evaluation does not become overwhelmed.
  • open questions (not answerable with a simple yes or no).
  • specific enough to focus the evaluation, but broad enough to be broken down into more detailed questions that guide data collection.

Work with primary intended users of the evaluation to develop an agreed list of key evaluation questions.

Being clear about the intended use of the evaluation and the type of evaluation needed can help with developing appropriate key evaluation questions.

The following typology can be used to classify types of evaluation and the questions each typically asks.

  • Needs analysis: What is needed? What are unmet needs?
  • Intervention design: What is the best way to design the intervention?
  • Monitoring: How is it going? (regular reporting of metrics)
  • Process evaluation: Is the intervention being implemented according to plan (periodic investigations)? What has been done in an innovative program?
  • Outcome / impact evaluation: What results have been produced? What has (and has not) worked for whom in what circumstances?
  • Economic evaluation: Has the intervention been cost-effective (compared to alternatives)? What has been the ratio of costs to benefits?

[Source: Adapted from Owen J with Rogers P (1999). Program Evaluation: Forms and Approaches. Sydney: Allen & Unwin/London: Sage UK.]

These evaluation types are cumulative: outcome / impact evaluation needs data from process evaluation, and economic evaluation requires data from outcome / impact evaluation.

The level of existing knowledge will also be important in developing appropriate evaluation questions.

  • When we know what works and why, it is sensible to ask whether processes are being followed (describe activities compared to an agreed standard) and to demonstrate the value of what is being done (describe outcomes compared to an agreed statement of goals and/or needs).
  • When we don’t know if it works, it is sensible to look at both process and outcomes / impacts (test the theory).
  • When we don’t know which is the best way, it is sensible to document process and context and compare performance (outcomes / impacts, efficiency).
  • When we don’t know what could work, it is sensible to use action research/learning and share results (ask a series of questions about early indications of success or failure).

Product

  • List of agreed key evaluation questions
