52 weeks of BetterEvaluation: Week 20: Defining what needs to be evaluated

13th May 2013 by Simon Hearn

Whether you are commissioning an evaluation, designing one or implementing one, having - and sharing - a very clear understanding of what is being evaluated is paramount. For complicated or complex interventions this isn't always as straightforward as it sounds, which is why BetterEvaluation offers specific guidance on options for doing this.

On Thursday last week I presented the second of eight webinars hosted with the American Evaluation Association introducing the DEFINE component of the BetterEvaluation Rainbow Framework - the set of tasks to help define exactly what is to be evaluated. This blog presents the recording of the webinar and responds to the many questions that were asked by the participants - in particular there was a great question about non-linear logic models and we've found a few examples to share (including one from another participant) but we'd love to hear from you if you have other examples.

View the webinar below or watch it on YouTube [20 minutes]:

You can read more about the three DEFINE tasks here and can also download a two-page brief based on the webinar.

Questions from the webinar

Participants asked several questions about logic models, along with some more general questions:

Questions about logic models...

1) Can you talk about ways to diagram program theory (e.g., logic model, etc.) that don't just show causal relations as linear and unidirectional relationships?

I mentioned in the webinar that the Outcome Mapping approach helps build non-linear theories of change but it is not so much visual. These two examples demonstrate how people have used Outcome Mapping to enhance their logframes: http://bit.ly/1498GVA & http://bit.ly/1498Egs.

This is something we'd really like to hear from you about. One webinar participant shared their example and we've managed to find a few others, but I'm sure there are more. Leave a comment below and we'll add them to the site. Here's what we have so far:

Evaluation of the Aboriginal Research Pilot Program

One attendee shared this evaluation report following the webinar - see page 14 for a non-linear logic model, and page 13 for the standard pipeline version (thank you to Orsolya Vaska!)

Lifting the lens: developing a logic for a complicated policy

This article presents a standard logic model developed by the lead agency and a non-linear version developed by the evaluators to better describe simultaneous causal strands.

Creating program logic models

This book chapter includes a section on non-linear logic models.

Causal Loop Diagrams: Little known analytical tool

Causal loop diagrams can be a good way to show complicated feedback loops.

2) If you are working on an overall project evaluation, could it be possible to define the theory of change/logic model prior to defining the evaluation questions?

Yes, working through a logic model can help identify evaluation questions. But it also works the other way around - evaluation questions can help clarify the level of detail that is needed in a logic model, and which aspects need to be described in more detail.

3) Are there times when certain logic model options are more useful than others? For example, is there one that's better for evaluating an organizational culture?

Yes. Our experience has been that pipeline models (and to some extent logical frameworks) are not as useful as the outcomes hierarchy or realist matrix options for representing interventions with complicated aspects (lots of components), complex aspects (emergent features), or where activities are not all at the start of the causal process but occur throughout. It would depend on the intervention and the mechanisms by which it is expected to contribute to ultimate outcomes.

4) When you have two or more outcomes, how do you create a logframe that shows different ways of getting to the same goal?

It will depend which format you are using. For example, the DFID framework guidance states that there should only be one outcome, covering all the changes that the intervention is seeking to bring about. To describe multiple strategies for contributing to the outcome, my suggestion would be to define multiple outputs - the effects of your activities. Beyond this I will have to defer to logframe experts - anyone?

5) In the [logframe on] slide 12, in the results area (first column) you pass directly from purpose to output. Why do you not use outcomes after purpose?

For this example we were following a basic logframe model described here, but there are many variations using different terminology. For example, the DFID logframe replaced goal with impact and purpose with outcome.

6) I think it is better to distinguish the logframe APPROACH from the logframe matrix which you showed. All logic models are variations on cause and effect.

The logical framework approach is a facilitated process for constructing a logical framework with a group. The logical framework is the product of this but many argue that the process is more useful than the product.

Questions about DEFINE in general...

7) What about use? Shouldn't that drive your design and be part of the issues to consider? If evaluation is not used, then why do it?

Yes, absolutely. This is covered in the FRAME cluster, which is the subject of the next webinar (link to the recording will appear soon).

8) At what point should the stakeholders be engaged in evaluation planning?

It will depend on which stakeholders, what type of evaluation, as well as many other things. See the task page on understand and engage stakeholders.

9) Does each stage of the Define process imply use of all the sample strategies? For example, more than one program theory approach, or all four under Unintended Consequences?

No, these are all options. Advice is provided on the BetterEvaluation website to help you choose which is most appropriate.

10) Unintended results seem to be described as negative. Can they not also be positive?

Yes they can. You can often be pleasantly surprised! We've noted your observation that the site currently has an emphasis on options to look for negative results and will redress the imbalance by adding more on positive unintended results.

11) Could you expand a little more on the 6 Hats Thinking process?

Developed by Dr. Edward de Bono, the “Six Thinking Hats” technique is a framework designed to promote holistic and lateral thinking in decision-making and evaluation. Conducted alone or in group meetings, participants – project members, key decision-makers and stakeholders – are encouraged to cycle through different modalities of thinking using the metaphor of wearing different conceptual “hats”.

On the Six Hats Thinking option page you can find an expanded description, an example of it in use, advice for using this option, and resources to guide you in using it.

12) I did not quite understand the use of the 'peak experience'. Please explain more on this.

This option provides a succinct and coherent description of a program, project or policy when it is operating at its best. This can then be used to develop a logic model (program theory) and an evaluation plan to investigate how often it operates like this and how this can happen more often.

On the Peak experience description option page, you can find further information including an example, advice for choosing and using the option, and useful resources.

 


This blog post is part of a series of eight posts covering the BetterEvaluation Framework and presenting the recordings of eight corresponding webinars hosted by the American Evaluation Association. The full series of posts is below.

1. Using the Rainbow Framework, Irene Guijt
2. Defining what needs to be evaluated, Simon Hearn
3. Framing the evaluation, Patricia Rogers
4. Choosing methods to describe activities, results and context, Irene Guijt
5. Understanding causes, Jane Davidson
6. Weighing the data for an overall evaluative judgment, Patricia Rogers
7. How can evaluation make a difference?, Simon Hearn
8. Manage an evaluation or evaluation system, Kerry Bruce


Author: Simon Hearn, Research Fellow, Overseas Development Institute, London.
