Remembering John Mayne

[Photo: John Mayne]

This week we wanted to share and celebrate the important contributions to improving evaluation from John Mayne, a Fellow of the Canadian Evaluation Society and twice recipient of the CES Award for Contribution to Evaluation in Canada.

We were saddened to hear that John passed away recently. His open and thoughtful approach to improving evaluation was evident in his personal manner and in his practice of sharing work-in-progress and inviting feedback – his paper on improving theories of change is currently in its fifth version. This openness meant that his work was informed by a rich understanding of the challenges and opportunities involved in improving evaluation and making a significant difference.

The Canadian Evaluation Society has paid tribute to John's contributions to theory, practice and the evaluation community in its announcement, and has invited people to share their memories and stories of John and his work on that page.

Here are three areas where John’s work leaves an important legacy to build on for improving evaluation. 

1. How do we know if programs have made a difference?

Almost all evaluations need to be able to answer causal questions. John developed contribution analysis – a systematic approach to determining whether an intervention (a project, program, policy, etc.) made a difference.

Many evaluations examine whether an intervention “made a difference” by comparing what happened to an estimate of what would have happened without the intervention (the counterfactual), usually through a randomly assigned control group or a similar comparison group. But it is not always possible to create or find a credible counterfactual. And the focus on answering the question “Does it work?” is not so useful for interventions which only work in certain situations and/or for certain groups of people.

Contribution analysis explicitly recognises the other factors which combine with most interventions to produce the intended effects. It assesses whether an intervention made a difference in terms of it being a necessary part of a package of causes which together brought about or contributed to the important changes. In addition to answering the question of whether an intervention made a difference, it also answers the question “How and why has the intervention (or component) made a difference, or not, and for whom?”. First developed in 2001, contribution analysis was initially used to improve monitoring systems and was later broadened to guide evaluation design.

John set out the iterative processes of developing and improving a contribution analysis as follows in his 2011 book chapter:


Table from Mayne, J. (2011). Contribution analysis: Addressing cause and effect. In R. Schwartz, K. Forss, & M. Marra (Eds.), Evaluating the complex (pp. 53–96). New Brunswick, NJ: Transaction Publishers. Reproduced in his 2019 open-access paper.

Resources:

Revisiting Contribution Analysis – open-access version of a December 2019 paper by John providing an overview of recent developments in contribution analysis

Contribution Analysis – 45-minute video presentation by John in 2015

Contribution analysis – BetterEvaluation approach page which sets out the key features and has links to key resources

2. How can we improve theories of change?

Another of John’s contributions was exploring ways of improving theories of change. In his 2019 paper “A brief on contribution analysis: Principles and concepts”, John set out the following criteria for a robust theory of change:

For a structurally sound ToC:

  1. Is the ToC understandable? Are there pathways of results, and causal link assumptions set out? Is there a reasonable number of results?

  2. Are the ToC results and assumptions well defined?

  3. Is the timing sequence of results and assumptions plausible?

  4. Is the ToC logically coherent? Do the results follow a logical sequence? Are the causal link assumptions pre-events and conditions for the subsequent effect? Is the sequence plausible or at least possible?

  5. Are the causal link assumptions necessary or likely necessary?

  6. Are the assumptions independent of each other (recognizing that some assumptions may apply for more than one causal link)?

For a structurally sound ToC that is plausible:

  1. Is the ToC generally agreed?

  2. Are the results and assumptions measurable, or at least key results and assumptions? What is the likely strength or status of evidence?

  3. Are the causal link assumptions likely to be realized? Are at-risk assumptions mitigated through confirming or corrective actions?

  4. Are the sets of assumptions for each causal link along with the prior causal factor plausibly sufficient to bring about the effect?

  5. Is the level of effort (activities and outputs) commensurate with the expected results?

  6. To what extent are the assumptions sustainable?

Text from Mayne, J. (2019) A brief on contribution analysis: principles and concepts

Given the importance of other factors to producing intended outcomes and impacts, it is important for theories of change to be able to represent these effectively.  Sometimes these are listed in a box of assumptions, but this minimises their significance.

John discussed ways of improving theories of change in a 2019 paper on Developing Useful TOCs which explored ways of showing the other factors that contribute to intended outcomes and impacts, and also the value of drawing on a generic change theory (Michie and colleagues’ Capability-Opportunity-Motivation-Behaviour (COM-B) model).


He proposed the following steps for developing a theory of change:

First steps

  • Identify the barriers to change
  • Identify the various actors involved in the intervention
  • Identify key pathways to impact
  • Identify intervention activities

Using the COM-B model to develop ToCs

  • Identifying desired behaviour and capacity changes
  • Generating and assessing causal link assumptions
  • Building credible causal narratives
  • Actor-based ToCs
  • Writing out a ToC in text

John suggested developing a number of different theories of change for a single intervention, to serve different purposes:

  • A Narrative ToC is the elevator pitch version of the ToC, set out in text which describes broadly how the intervention is intended to work. It might typically set out in a few words the key pathways of the intervention and then describe the intended key results. It should only be a sentence or two.
  • An Overview ToC could just be a simplified impact pathway showing as relevant any sub or nested theories of change pathways, OR a ToC showing the main pathways to impact, along with, in either case, the rationale assumptions. The concept is to capture the big picture.
  • More detailed Nested ToCs, showing the several impact pathways and the causal link assumptions details that support the theory of change, or a series of detailed nested pathway ToCs for each main pathway to impact. 

Resources

A brief on contribution analysis: Principles and concepts – 2019 paper which outlines criteria for a good TOC

Developing Useful TOCs – 2019 paper (version 5) outlining a range of strategies for improving TOCs

Sustainability Analysis of Intervention Benefits: a Theory of Change Approach – 2020 paper that addresses how a theory of change can articulate what will be needed to sustain the outcomes achieved by an intervention

‘Good’ Theories of Change – 90-minute video of a 2017 presentation by John at the Evaluation Centre for Complex Health Interventions, Toronto

3. How can we improve the commissioning of impact evaluations?

A third area of John’s work was on improving the match between the questions that would be most useful for an impact evaluation to answer and the questions that are possible to answer given the methods used and the available resources.

He argued that impact evaluations that tried to focus on a single causal factor were not likely to be useful for most interventions, where multiple factors combine to produce outcomes and impacts. Instead, impact evaluations should be framed around questions such as:

  • Did the intervention contribute to observed impacts?
  • How and why did the intervention make that contribution?
  • What other causal factors were at play?
  • What was the relative importance of the various causal factors?
  • Are the results achieved sustainable?
  • Will the intervention work elsewhere?
  • Can it be scaled up?
  • What are lessons learned?
  • What is the likely future impact of the intervention?

Resources:

Assessing the Relative Importance of Causal Factors – 2019 Centre for Development Impact paper by John that explores ways of answering the question “How important were the intervention's efforts in bringing about change in comparison to other factors?”

Realistic Commissioning of Impact Evaluations: Getting What You Ask For? – chapter in Paulson, A. and Palenburg, M. (eds) (2020) The RealPolitik of Evaluation: Why Demand and Supply Rarely Intersect (Taylor & Francis). The book is not open access; the link directs to the chapter abstract.

