52 weeks of BetterEvaluation: Week 21: Framing the evaluation

By Patricia Rogers

What's one of the most common mistakes in planning an evaluation? Going straight to deciding data collection methods. Before you choose data collection methods, you need a good understanding of why the evaluation is being done. We refer to this as framing the evaluation.

This idea of framing is common to other design processes as well. Before designing a house, an architect takes time to develop a brief. Who will use the house? What are their needs? Are there existing resources that can be used? Are there significant constraints? We need to do the same thing with an evaluation. Who will use the evaluation? What will they use it for? What do they need to know?

When I am asked to review an evaluation plan, I often find that it goes straight from a description of the project to a set of data collection tools or metrics. The following is the sort of feedback I give to help evaluation teams address the issue of framing.

Addressing issues of framing

1. Be very clear about the primary intended users and primary intended uses of the evaluation.

Who will use it and what will they do with it? You might well have a number of these and you'll need to think about whether one evaluation will meet different needs or whether you need different components. There will also be secondary users (e.g. other people in other projects who might be interested) but if you're not prepared to take responsibility for working with them to actually use the evaluation, they're not your primary intended users.

If possible, try to engage these intended users in planning the evaluation, to be sure it will be relevant and credible.

Check out the following resource for more suggestions on how to do this.

  • Identifying the intended user(s) and use(s) of an evaluation: This guideline from the International Development Research Centre (IDRC) highlights the importance of identifying the primary intended user(s) and the intended use(s) of an evaluation and outlines a variety of methods that can be used to achieve this in the initial planning stage.


2. Identify a small number of key evaluation questions - the high-level questions, drawn from various sources, that the evaluation is intended to answer.

Check these out with your primary intended users and agree on a clear list of priority questions. Then draw up a matrix showing what data will be used to answer them - including existing data. This will help you to prioritise additional data collection, and will also help with analysis and reporting.

It is useful to distinguish between different types of questions.

  • Descriptive - What has happened?

  • Causal - Has the program produced the changes we’ve observed?

  • Evaluative - Overall, how good was the program?

  • Action - What should we do?

For each type of question, the BetterEvaluation site suggests where to find methods for answering it.

An evaluation matrix sets out each key evaluation question and identifies how it will be answered. The resource "Evaluation ToR guidelines", developed by World Vision to guide the development of terms of reference for an evaluation, discusses using an evaluation matrix.
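For illustration, a simple evaluation matrix for a hypothetical community tutoring program might look something like this (the program, questions and data sources here are invented for this example, not drawn from the World Vision guideline):

  • How was the program implemented? (descriptive) - existing attendance and activity records, plus observation of a sample of sessions.

  • Did the program improve participants' reading skills? (causal) - existing baseline test scores, plus follow-up testing and data from a comparison group.

  • Overall, is the program good enough to continue? (evaluative/action) - interviews with stakeholders, synthesised against agreed criteria of success.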

Find resources and examples of key evaluation questions on the Decide key evaluation questions page.

3. Clarify what success would look like.

Think about this in terms of criteria (the aspects or dimensions of performance) and standards (the level of performance required), and also in terms of both outcomes/impacts and processes.
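For example (an invented illustration rather than one from the original post): for a job-training program, an outcome criterion might be "participants gain sustained employment", with a standard such as "at least 60% of completers are employed six months after finishing"; a process criterion might be "the program reaches its target group", with a standard such as "at least half of enrolments come from the priority communities".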

Find various options of how to identify and negotiate the values that should underpin the evaluation on the Determine what success looks like page.


You can watch the first webinar, access slides and links to free downloadable tools, and get a full overview of the webinar series below:

  • Frame the boundaries of the evaluation

  • BetterEvaluation series of AEA Coffee Break webinars

Questions from the webinar

1) Can you have multiple intended users?

Yes, there are often multiple intended users of an evaluation, with different information needs. For example, program staff might want frequent, rapid information about how things are going, so they can identify and correct gaps and problems in implementation, and identify and repeat effective strategies. Funders and policy makers might want information near the end of a project to inform decisions about whether to continue it.

It might be possible to meet different needs in a single evaluation, or to design a series of connected evaluations that draw on a single data system but provide different reports for different intended users.

2) Is it possible to evaluate a project without having designed an evaluation program going into the project? We've been doing the same program for some time and now folks are beginning to ask: should we continue this?

Yes, while it’s better to plan an evaluation from the beginning, it’s very often the case that the need for an evaluation emerges over time. 

3) Please say a bit more about "broader evidence base" as an intended use.

A lot of resources go into conducting evaluations that are used briefly (if at all) and then discarded. At the very least, organizations should look at ways of making these reports (and ideally the actual data) available internally to inform future use by others who might not be involved initially but would find the reports useful. Even better is to make these publicly available so that people outside the organization can learn from this evidence. There is some more discussion of this type of use under the task Synthesize evidence from more than one evaluation.

4) Do you suggest that a stakeholder analysis is conducted and a matrix prepared before going ahead on the questions the evaluation is being planned to answer?

Yes, it would be helpful to do this formally.

5) Do you see any difference between descriptive questions and real evaluative questions?

I see this as a cascade. In order to answer evaluative questions (such as “Is this good?” or “Is this better?”), we need to answer normative questions (“What does success look like?”), descriptive questions (“What happened?”), and causal questions (“What caused the observed changes?”).

6) Does the frame change with the context of evaluation or any other consideration such as cultural considerations?

Cultural considerations are important in evaluation. They can affect how we collect data, how we present findings and how we manage the evaluation. A useful resource is the American Evaluation Association’s Statement on Cultural Competence in Evaluation, which discusses the implications of cultural differences for what counts as valid and ethical evaluation.

7) Often the goal of evaluation is defined as a) accountability and b) learning. Do you see any tension between these two goals?

There can often be a tension between them – especially if accountability is poorly understood and implemented. For example, a poor accountability system will penalize a program for not implementing activities as planned, even if the reason for the change was learning about more effective ways to operate. There is a useful paper by Irene Guijt which outlines some ways to frame these two purposes and reduce the tension.

8) I have a question about the balance between utilisation/intended use and the process of evaluation as the purpose of an evaluation. Do these need to be balanced?

When we’re thinking about intended use, we’re not only thinking about use of the findings but also about the impact of the process itself. Bringing people together to build up a better understanding of each other’s perspectives, for example, can be a useful outcome of an evaluation, in addition to any use made of the findings.

9) In research, tested analysis frameworks are often used. Would you recommend testing evaluation frameworks, or using previously tested frameworks?

I think what is needed is an appropriate balance between what has been previously tested and what is locally appropriate. So it depends a bit on what you mean by an evaluation framework.

If you’re talking about data collection processes, then it is very important to test data collection and analysis before doing it.  And using previously developed measures might be a good way to do that.  But you also need to check that these will be seen as relevant and valid by the intended users. 

If you’re talking about an overall evaluation plan, then a plan for another similar project might provide a useful starting point but it will probably need to be adapted, and might need to be completely different, in order to meet the specific needs of the specific intended users.
