Using logic models and theories of change better in evaluation

19th May 2017 by Patricia Rogers

Many evaluations include a process of developing logic models and theories of change: an explanation of how the activities of a program, project, policy, network or event are expected to contribute to particular results in the short term and longer term. They have been used for many years (versions can be seen in Carol Weiss's 1972 book "Evaluation research: methods for assessing program effectiveness") and have been mainstreamed in many organisations as an essential component of planning and evaluation under various labels, including program theory, programme theory, intervention logic, investment logic and outcomes hierarchy.

However, their full potential is often not realised, as many people seem to think that the basic version they know is all there is, even when it doesn't really meet their needs.

We've explored some of these issues in previous blogs and events, so we thought it would be useful to group these resources under the common challenge areas people have with theories of change:

Developing a theory of change

The process of developing a theory of change doesn't have to involve only a group of people writing on sticky notes. It's often important to bring in information from research, previous evaluations, and the perspectives of those with lived experience of the program or the situation it is intended to address. And it's important to actually have a theory: an explanation of HOW you expect activities to contribute to the intended results.

Blogs

Having a theory in the theory of change (52 weeks of BetterEvaluation: Week 10) 

One of the common problems in using a theory of change is not actually having a theory. This blog addresses that issue.

How do I choose a suitable theory of change? (BetterEvaluation FAQ)

It's important to make sure that the theory of change actually has a theory about how change will come about, not just some boxes with arrows between activities and outcomes/impacts. So how do you choose which change theory to use?

Having an adequate theory of change (52 weeks of BetterEvaluation: Week 12)

Projects and programs that are based on an inadequate theory of change are less likely to be effective, because plans and activities will not cover everything that needs to be done, and projects will be implemented when there is little chance of success.

Options on BetterEvaluation: Processes for developing a programme theory

  • Articulating mental models: talking individually or in groups with key informants (including programme planners, service implementers and clients) about how they understand an intervention to work.
  • Backcasting: working backwards from a desirable future to the present in order to determine the feasibility of the idea or project.
  • Five Whys: asking questions in order to examine the cause-and-effect relationships that create underlying problems.
  • Group model building: building a logic model in a group, often using sticky notes.
  • Previous research and evaluation: using the findings from evaluation and research studies that were previously conducted on the same or closely related areas.
  • SWOT Analysis: reflecting on and assessing the Strengths, Weaknesses, Opportunities and Threats of a particular strategy in order to discover how it can best be implemented.

Representing a theory of change

A theory of change doesn’t have to only be in the form of a pipeline of:

inputs -> activities -> outputs -> outcomes -> impacts

It can often be more useful to represent a theory of change as a sequence of results where activities can occur along the chain (an outcomes hierarchy), or as a triple column/row version which shows activities and other factors visually. And there are lots of useful technologies that can be used to draw them.
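
To make the contrast concrete, here is a minimal sketch in Python of a theory of change held as a directed graph rather than a fixed pipeline, so activities can feed into the chain at any point, as an outcomes hierarchy allows. All node names are invented for illustration; this is not code from any BetterEvaluation tool.

    # Sketch only: a theory of change as a directed graph rather than a
    # single linear pipeline. Node names are purely illustrative.
    from collections import defaultdict

    # Each edge reads as "is expected to contribute to". Note that the
    # second activity joins the chain partway along, not at the start.
    edges = [
        ("Run parenting workshops", "Parents use positive routines"),
        ("Parents use positive routines", "Children arrive school-ready"),
        ("Train teachers", "Teachers reinforce routines"),
        ("Teachers reinforce routines", "Children arrive school-ready"),
        ("Children arrive school-ready", "Improved literacy"),
    ]

    graph = defaultdict(list)
    for cause, effect in edges:
        graph[cause].append(effect)

    def causal_paths(node, path=()):
        """Yield every causal path from a node to a final outcome."""
        path = path + (node,)
        if not graph[node]:      # no outgoing links: a final outcome
            yield path
        for nxt in graph[node]:
            yield from causal_paths(nxt, path)

    for activity in ("Run parenting workshops", "Train teachers"):
        for p in causal_paths(activity):
            print(" -> ".join(p))

Writing the links out explicitly like this makes it easy to see which causal steps the program depends on and where evidence will be needed.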

Blogs

Q & A about drawing logic models (52 weeks of BetterEvaluation: Week 3)

This blog discusses the work of Carol Weiss, who showed how useful it can be to focus on teasing out the different possible causal paths between a program's activities and its outcomes, and gives some examples and tips for drawing logic models.

Defining what needs to be evaluated (52 weeks of BetterEvaluation: Week 20)

In 2013, Simon Hearn presented the second of eight AEA Coffee Break webinars, introducing the DEFINE component of the BetterEvaluation Rainbow Framework. This blog responds to the many questions asked by participants; in particular, there was a great question about non-linear logic models.

Options on BetterEvaluation: Ways of representing programme theory in a logic model

  • Tiny Tools Results Chain: mapping both positive and negative possible impacts from an intervention.
  • Logframe: designing, executing and assessing projects by considering the relationships between available resources, planned activities, and desired changes or results.
  • Outcomes hierarchy (also known as a theory of change or an outcomes chain): showing a series of outcomes leading up to the final impacts of a project.
  • Realist matrix: focusing on one of the steps in an outcomes chain and then identifying the mechanism involved in producing the outcome and the contexts within which this mechanism operates.
  • Results chain (also known as a 'pipeline model'): showing a programme as a series of boxes: inputs -> activities -> outputs -> outcomes -> impacts.
  • Triple column: showing an outcomes hierarchy in the central column, with activities and other factors shown alongside it.

Using a theory of change

A theory of change is often used when planning a program or project, to develop a clearer and more plausible plan, but its benefits for monitoring and evaluation are sometimes not realised.

Here are some ways to use it:

  • Guide data collection by focusing on what is needed in terms of measures, indicators or metrics of intended outcomes (see the sketch after this list).
  • Identify which outcomes are likely to be evident during the life of the evaluation.
  • Identify other sources of evidence that can support later causal links. For example, early childhood programs are often evaluated well before the effects on children's education and employment can be seen, but these evaluations can draw on evidence from research and evaluations about the likely positive impacts of improving literacy, secure attachment and emotional intelligence.
  • Explain whether failure to achieve intended results is due to implementation failure or to theory failure, by connecting information about processes with information about results across cases or sites.
  • Strengthen causal inference by identifying evidence that is either consistent with or challenges the theory of change.
  • Support generalisation by identifying what works for whom in what context.
  • Support synthesis across different studies that share a common theory of change.
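
As a toy illustration of the first three points above, here is a minimal sketch in Python of using a theory of change to drive an indicator plan and to flag outcomes that fall beyond the evaluation's timeframe. The outcomes, indicators and timings are all invented for illustration; this is not code from BetterEvaluation or any real program.

    # Sketch only: map each intended outcome in a theory of change to
    # indicators, and flag which outcomes can be observed within the
    # life of the evaluation. All names and numbers are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        name: str
        indicators: list       # measures/metrics for this outcome
        earliest_year: int     # earliest year evidence is expected

    theory_of_change = [
        Outcome("Parents use positive routines", ["observed routine use"], 1),
        Outcome("Children arrive school-ready", ["school-readiness score"], 2),
        Outcome("Improved literacy and employment", ["reading assessment"], 5),
    ]

    EVALUATION_YEARS = 3   # how long the evaluation runs

    for outcome in theory_of_change:
        if outcome.earliest_year <= EVALUATION_YEARS:
            plan = "collect: " + ", ".join(outcome.indicators)
        else:
            # Too late to observe directly: draw on existing research
            # about this causal link instead, as suggested above.
            plan = "cite prior research/evaluation evidence for this link"
        print(outcome.name + " -> " + plan)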

Resources

When and how to develop an impact-oriented monitoring and evaluation system

Many development programme staff have had the experience of commissioning an impact evaluation towards the end of a project or programme only to find that the monitoring system did not provide adequate data about implementation, context, baselines or interim results. This Methods Lab guidance note by Greet Peersman, Patricia Rogers, Irene Guijt, Simon Hearn, Tiina Pasanen and Anne L. Buffardi has been developed in response to this common problem.

Causal inference for program theory evaluation

This blog by E. Jane Davidson and Patricia Rogers discusses how program theory can support analysis of causal attribution and contribution even when there is not a credible counterfactual. (See the BetterEvaluation page on Understanding Causes for more information on these options.)

Need more support?

For those of you looking for more support or advice, we're offering BetterEvaluation members the chance to submit a question or challenge that they have in relation to creating or using a theory of change, for review by the BetterEvaluation team.

We'll be selecting a number of these questions to be de-identified and answered by Patricia Rogers in the next newsletter's blog. Contact us privately with your question, or leave it as a comment below.

Patricia Rogers will also be exploring these issues in the next few months in two courses on advanced use of program theory: at TEI in Washington during their July 2017 program, and at ANZSOG in Melbourne on 16 August.

And of course, as always, we welcome your thoughts, suggestions, and resources on this topic, so if you have something to share, let us know.

Author
Patricia Rogers, Director of BetterEvaluation / Professor of Public Sector Evaluation, Australia and New Zealand School of Government, Melbourne.

Comments

Dössegger

There's a recent resource reflecting on the development of a logic model: http://www.zfev.de/aktuelleAusgabe/abstracts/04%20Dössegger%20et%20al.pdf

[Article in German, English abstract below] 

Program models have become an increasingly used evaluation instrument in theory-based evaluation approaches. However, many questions concerning adequate procedures for developing and using program models in evaluation remain open. In part, this problem could be due to a lack of sufficiently detailed descriptions of specific case examples. After a short overview of the role of program models in the evaluation literature, the development of a program model of the Swiss 'youth and sports' program is described in detail. The development was based simultaneously on a literature review and stakeholder input. The experiences gained during this process are reflected on from three perspectives: commissioners, model developers, and evaluation theory. A particular focus concerns three often-mentioned challenges in using program models in evaluation: Are program models blind to unintended outcomes? Should program models reflect stakeholders' views or theories? Is the effort of developing a model justified?

Julian

I think there's an additional use for theories of change that is worth listing and should perhaps even go at the top of the list: support evaluative reasoning by providing a point of reference for systematically identifying criteria of merit and worth. 

Sandwiching rubric development between the theory of change and the selection of metrics and methods in the sequence of evaluation design helps ensure the validity of indicators. The values embedded in the rubric cohere with and elaborate upon the theory of change, and the metrics cohere with the content of the rubric.

David M. Fetterman

Hi

I am working on an empowerment evaluation effort in the Oakland Unified School District (in California). We are using a combination of empowerment evaluation and action research. I have found that the coaches and teachers are developing an iterative and emergent theory of change. It began with a simple premise: if we share our truth, things/behavior will change. Quickly, the group recognized that this was not really a theory of change and that they had to identify mechanisms and links that change behavior. They are creating a series of links and desired behavior changes, but my point is that sometimes these theories have to emerge out of practice to be owned and useful.

Héctor Tuy

Thanks! Alain

Bob Williams

I'm not saying that the common models evaluators use to express theories of change are inherently wrong, but I do recall a panel I was on at the American Evaluation Association conference about 15 years ago called "Is there anything more to say about logic models?". My contribution was called 'The conversation has barely started'. Fifteen years later, what amazes me is how little that conversation has shifted. We've added a few twiddly bits in the name of complexity but not really engaged with the fact that this notion can, in some interpretations, challenge the very notion of ToC. As I said then and I say today, the ways in which the logic of an intervention can be explored and expressed are myriad. Yet after all these years we still have those little boxes connected by their little, largely unidentified, arrows. From time to time other people have suggested fundamentally different ways of expressing the logic of interventions. I've tried to introduce a few from the systems field; Michael Scriven and Jane Davidson promoted a more argumentative way of seeking to understand the logic of an intervention; concept mapping had a short period in the sun but never really took off; and for a while Rick Davies made a sterling but ultimately unsuccessful attempt to argue that network analysis was an alternative way of expressing an intervention's logic. These are just a few examples of the range of ways in which the theory of an intervention can be expressed beyond variations of the four-box or outcome models. Yet rather than explore the possibilities of these, evaluators appear content to endlessly tinker with and argue over just one.

My question is not whether these tinkerings or arguments are 'good' but how 'professional' they are. As a craft, over the years we have found a really good way of building a particular kind of table... but what happens when we encounter different eating habits or alternative ways of displaying the family photos? If we have aspirations to be a profession, then part of that responsibility is constantly to seek out alternative and perhaps better options for those who use our services, not just sell repainted or reconditioned versions of the same product we've been using for thirty years. Especially when there is no shortage of alternatives available. Have we made the fundamental mistake of confusing what is essentially a bunch of methods (logic models) with a methodology (ToC)? Certainly much of what I read in many evaluation discussion groups would suggest that we have.