Contribution analysis


Contribution analysis is an evaluation approach that provides a systematic way of understanding an intervention's contribution to observed outcomes or impacts.

Overview

Contribution analysis provides a systematic way of understanding the contribution an intervention (such as a project, program, portfolio, policy or advocacy campaign) has made to observed results (outcomes or impacts). It involves developing or drawing on a reasoned, plausible causal theory of how change is understood to come about. The process includes assessing whether existing and additional evidence is consistent with this theory of change, revising the theory of change to better incorporate other contributory factors, and identifying and ruling out, where warranted, alternative explanations in order to understand an intervention's actual contribution.

A contribution analysis report provides evidence and a line of reasoning from which a plausible conclusion can be drawn about the intervention's contribution to documented results. It provides a level of confidence regarding the nature and importance of this contribution. It also provides an increased understanding of how, why, and for whom the observed results have (or have not) occurred, and the roles played by the intervention and other factors.

Steps in the process

The creator of the approach, John Mayne, set out six steps for contribution analysis, which are better understood as a set of iterative steps (as shown in the diagram from Apgar et al., 2020).

[Diagram: the six steps of contribution analysis arranged in a loop from step one through to six, with step six feeding back into steps two and four, as described in the text below.]

Source: Apgar et al., 2020

1. Set out the attribution problem to be addressed

The type of cause-effect question that is being asked in the evaluation needs to be determined. Contribution analysis explicitly recognises that change at the level of outcomes and impacts occurs due to a combination of factors, known as a causal package. An intervention might contribute to this package but will not be the sole factor producing change. Rather than answering questions like “Did the intervention cause the observed change?”, contribution analysis is appropriate for answering questions such as:

  • Is it reasonable to conclude that the intervention contributed to the observed changes?

The required level of confidence needs to be determined, and this should be based on the kinds of decisions the evaluation will inform and the needs of its primary users.

The type of expected contribution an intervention makes to specific changes needs to be explored in terms of its nature and extent. This includes an exploration of:

  • Which results would the intervention be expected to directly influence (typically results at the level of immediate outcomes)?
  • Which results might the intervention be expected to indirectly influence (typically at the level of later outcomes and impacts)?

Other key influencing factors need to be identified and explored, including their likely significance.

To complete this step, an initial assessment is made of the plausibility of the expected contribution in relation to the size of the intervention and the complexity of influencing the result of interest. Given the size of the intervention, the magnitude and nature of the problem and the other influencing factors, is an important contribution by the intervention likely? If a significant contribution by the intervention is not plausible, there might not be value in completing a contribution analysis.

2. Develop (or review and revise if needed) a theory of change and risks to it

Develop the program logic/results chain describing how the intervention is supposed to work. Identify the main external factors that might account for the observed outcomes. Based on the results chain, develop the theory of change upon which the intervention is based, including articulating the causal links between results, any underlying assumptions about how change will come about, and the risks to achieving the conditions under which the intervention will work. This theory of change should establish a plausible association between the intervention's activities and the outcomes sought. It is important to determine which parts of the theory of change are contested, by whom, and why (reflecting different perspectives on how change comes about), and which links in the causal chain are generally well understood and accepted.

3. Gather existing evidence on the theory of change

A review of the theory of change will identify where evidence is most needed. This might include examination of:

  • What evidence (such as information from performance measures and evaluations) is currently available about activities and various results?
  • What evidence currently exists on the causal links that connect each result to the next, and about the assumptions about the conditions under which these causal links will work?
  • What evidence exists about the other influencing factors that have been identified and the contribution they may be making?

4. Assemble and assess the contribution story and challenges to it

With this information, the evaluation team and stakeholders can assemble and critically assess an initial contribution story. This might include:

  • Which links in the causal chain are strong (good evidence available, strong logic, or wide acceptance) and which are weak (little evidence available, weak logic, or little agreement among stakeholders)?
  • How credible is the contribution story overall? Do stakeholders agree with the story, given the available evidence?
  • Who does or does not agree, and why? Do they agree that the intervention has made an important contribution (or not) to the observed results?
  • Where are the main weaknesses in the story, and where would additional evidence be most useful?
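As a loose illustration of how the link-by-link assessment in this step might be organised (this is not part of Mayne's approach, and all names in the sketch are hypothetical), the strength of each causal link could be recorded along three dimensions and the weakest links flagged as priorities for additional evidence gathering in step 5:

```python
# Illustrative sketch only: recording the strength of each causal link in a
# theory of change and flagging the weakest ones for follow-up. The class and
# function names (CausalLink, weakest_links) are hypothetical, not part of
# any contribution analysis toolkit, and the ratings are invented.
from dataclasses import dataclass

@dataclass
class CausalLink:
    source: str    # preceding result in the causal chain
    target: str    # result this link is expected to produce
    evidence: int  # strength of available evidence (1 = weak, 5 = strong)
    logic: int     # strength of the underlying reasoning (1-5)
    agreement: int # degree of stakeholder agreement (1-5)

    def strength(self) -> int:
        # A link is only as strong as its weakest dimension, so take the minimum.
        return min(self.evidence, self.logic, self.agreement)

def weakest_links(chain: list[CausalLink], threshold: int = 2) -> list[CausalLink]:
    """Return links whose overall strength falls at or below the threshold,
    i.e. where additional evidence would be most useful."""
    return [link for link in chain if link.strength() <= threshold]

# Hypothetical theory of change for a training program
chain = [
    CausalLink("Training delivered", "Skills improved", evidence=4, logic=5, agreement=5),
    CausalLink("Skills improved", "Practices changed", evidence=2, logic=4, agreement=3),
    CausalLink("Practices changed", "Outcomes improved", evidence=1, logic=3, agreement=2),
]

for link in weakest_links(chain):
    print(f"Gather more evidence on: {link.source} -> {link.target}")
```

In this invented example, the later links in the chain are the weakest, which is a common pattern: evidence is usually strongest close to the intervention's activities and thinnest at the level of outcomes and impacts.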

5. Seek out additional evidence

This step involves identifying what new data are needed, adjusting the theory of change if needed, and gathering more evidence. This can involve:

  • Interviews, focus groups, surveys and case studies
  • Analysis of variations in implementation over time and across locations
  • Detailed evaluation of a particular component of the program where existing data is weak
  • Synthesis of evidence from research and evaluations

6. Revise and, where the additional evidence permits, strengthen the contribution story

With the new evidence, you should be able to build a more substantive and credible story that a reasonable person will be more likely to agree with. It will probably not be foolproof, but the additional evidence will have made it stronger and more plausible.

At this point, step 4 might be revisited, critically reviewing the strength of the revised contribution story and gathering additional information as needed, or step 2 might be revisited, revising the theory of change.

The contribution story might be judged to be sufficiently developed if:

  • There is a reasoned theory of change for the intervention: the key assumptions behind why the intervention is expected to work make sense, are plausible, may be supported by evidence and/or existing research, and are agreed upon by a range of key players.
  • The intervention activities were implemented as set out in the theory of change.
  • The theory of change, or key elements of it, is confirmed by evidence on observed results and underlying assumptions: the chain of expected results occurred, and the theory of change has not been disproved.
  • Other influencing factors have been assessed and either shown not to have made a significant contribution, or their relative role in contributing to the desired result has been recognised.

Later evaluators (Wimbush et al., 2012) added a seventh step: using contribution analysis to support organisations or networks to review their strategies and tactics, learn from what they have done, and make improvements.

Causal Pathway Features

How this approach might be used to incorporate features of a causal pathways perspective

A causal pathways perspective on evaluation focuses on understanding how, why, and under what conditions change happens or has happened. It is used to understand the interconnected chains of causal links which lead to a range of outcomes and impacts. These causal pathways are likely to involve multiple actors, contributing factors, events and actions, not only the activities associated with the program, project or policy being evaluated or its stated objectives.

Contribution analysis can be used in ways that incorporate the following features of a causal pathways perspective:

  • Valuing actors’ narratives: As part of the iterative process of investigating causal links, actors’ narratives should be included as part of the evidence used to develop, refine and test the theory of change.
  • Paying attention to a range of outcomes and impacts: In the process of testing the original theory of change, other interventions and pathways may be identified that also contribute to the outcome of interest, and related to those pathways, new intermediate outcomes that might merit exploration.
  • Contextual variation: Identifying and describing differences in how interventions work in different contexts and for different people. To thoroughly test a theory of change and develop a robust contribution story, an evaluation needs to take into account how different people in different contexts experience the intervention differently. This requires a thorough understanding of the distinct contexts in which an intervention has been undertaken, as well as the range of people involved with it. The evaluation needs to collect data from each of these sources and reflect each perspective in its analysis.
  • Iterative, bricolage approach to evaluation design: Using data to inform subsequent data collection and analysis. Contribution analysis uses iterative evaluation design, beginning by identifying and analysing existing data and then designing further data collection and analysis to fill gaps.
  • Drawing on a range of causal inference strategies: Contribution analysis can use a wide range of causal inference methods, not limiting these to any particular ones. It uses the logic of checking the consistency of the evidence with the theory of change and looking for and ruling out alternative explanations or adjusting the theory of change to include other contributory factors.
  • Taking a complexity-appropriate approach to evaluation quality and rigour: Contribution analysis uses triangulation of results using different methods and sources. Its iterative, focused evaluation design maximises the use of existing evidence, saving additional data collection to explore causal links where this will be most useful.

Background

History of this approach

Contribution analysis was originally developed by John Mayne, a Canadian evaluator initially working with the federal auditor general, as a way of addressing the attribution implied when performance indicators related to theories of change are reported. It was later expanded for use in evaluation. More recently, work has demonstrated how contribution analysis can be combined with specific methods for causal inference, such as process tracing (Befani & Mayne, 2014) or Bayesian confidence updating (Punton & Barnett, 2018), to provide a specific and transparent approach to assessing the strength of evidence underpinning a theory of change.
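As a rough sketch of the arithmetic behind Bayesian confidence updating (the function name and figures below are invented for illustration; this is not the Punton & Barnett procedure itself), Bayes' rule shows how a single piece of evidence shifts confidence in a contribution claim:

```python
# Illustrative sketch of Bayesian confidence updating: how confidence in a
# contribution claim shifts when a new piece of evidence is observed.
# All numbers are invented; only the underlying Bayes rule is standard.

def update_confidence(prior: float, sensitivity: float, type1_error: float) -> float:
    """Posterior probability of the contribution claim given the evidence.

    prior        -- P(claim true) before seeing the evidence
    sensitivity  -- P(observing this evidence | claim true)
    type1_error  -- P(observing this evidence | claim false)
    """
    numerator = sensitivity * prior
    denominator = numerator + type1_error * (1 - prior)
    return numerator / denominator

# Start neutral (50%) and observe evidence that is common if the claim is true
# (80%) but rare if it is false (10%).
confidence = update_confidence(prior=0.5, sensitivity=0.8, type1_error=0.1)
print(round(confidence, 3))  # 0.889
```

The key point is that evidence only raises confidence to the extent that it would be unlikely to be observed if the claim were false, which is why contribution analysis emphasises ruling out alternative explanations rather than simply accumulating confirming observations.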

Methods that are part of this approach

BetterEvaluation defines an approach as a systematic package of methods. The Rainbow Framework organises methods in terms of more than 30 tasks involved in planning, managing and conducting an evaluation; contribution analysis draws on methods relating to a number of these tasks.

Example

Contribution analysis was used in an impact evaluation that examined the contribution of two forestry research centres, the Centre for International Forestry Research (CIFOR) and the Centre de Coopération Internationale en Recherche Agronomique pour le Développement (CIRAD), to sustainable forest management (SFM) practices in the six countries of the Congo Basin. A detailed theory of change was developed through stakeholder interviews, consisting of five different contribution pathways, or ways in which research activities could plausibly contribute to the intended impacts.

The theory of change also recognised the importance of the efforts of international NGOs in achieving these impacts.

To test the theory of change, the evaluation team gathered evidence from 65 stakeholder interviews, reviewed 130 documents, including previous evaluations and bibliographic studies, and conducted three case studies – a country case study (Cameroon) and two research area cases (forest management and forest products other than timber).

The evaluation found that the most significant contributions of the research centres were to the national management standards (which establish how the regulatory frameworks are implemented) and to the certification criteria that apply to timber companies. The credibility of the revised contribution story was checked in three separate ways: the evaluation Steering Group reviewed all working documents; the first case study was reviewed by several researchers, as was the final report; and a detailed review was undertaken by a reviewer who examined the credibility of key findings, identified and discussed questionable points, and reviewed the evidence base for findings.

(Note: This example was adapted from "Making rigorous causal claims in a real-life context: Has research contributed to sustainable forest management?" by T. Delahais & J. Toulemonde. Evaluation, 23(4), 370-388. Copyright 2017.)

Advice for choosing this approach

What types of projects and programs would contribution analysis be appropriate for?

Contribution analysis is particularly useful in situations where an intervention has been implemented at scale or in situations of complex change (such as an advocacy campaign aimed at influencing national-level policy change or a landscape-level program), in which multiple factors and actors have influenced a desired outcome over time.

What types of evaluations is contribution analysis appropriate for?

Contribution analysis is appropriate for evaluations that are asking causal contribution questions:

  • Has the intervention influenced the observed result?
  • Has the intervention made an important contribution to the observed result?
  • Why has the result occurred?
  • What role did the intervention play within a broader causal package?
  • Is it reasonable to conclude that the intervention has made a difference?
  • What does the preponderance of evidence say about the degree to which the intervention is making a difference?
  • What conditions are needed to help this type of intervention influence the outcome of interest?

It is not suitable for answering causal questions such as:

  • Has the intervention caused the outcome?
  • To what extent, quantitatively, has the intervention caused the outcome?

What resources are needed?

Contribution analysis requires adequate time for developing and implementing an iterative evaluation design, including analysing existing data and collecting a substantial amount of additional data.

Advice for using this approach effectively

Critical issues

Working through the steps systematically and iteratively will ensure that available evaluation resources are best directed to focus on causal links of most importance and uncertainty.

Using an explicit causal inference method, such as process tracing or multiple lines and levels of evidence, is important for strengthening the contribution story.

Resources


Discussion papers about methods

Apgar, M., Hernandez, K., & Ton, G. (2020). Contribution analysis for adaptive management. Briefing note.

Befani, B. & Mayne, J. (2014). Process Tracing and Contribution Analysis: A Combined Approach to Generative Causal Inference for Impact Evaluation. IDS Bulletin, 45(6), 17-36. Retrieved from: http://onlinelibrary.wiley.com/doi/10.1111/1759-5436.12110/abstract

Delahais, T., & Toulemonde, J. (2017). Making rigorous causal claims in a real-life context: Has research contributed to sustainable forest management? Evaluation 23(4), 370-388.

Kane, R., Levine, C., Orians, C. & Reinelt, C. (2017). Contribution analysis in policy work: Assessing advocacy’s influence. Washington DC: Center for Evaluation Innovation. https://www.evaluationinnovation.org/wp-content/uploads/2017/11/Contribution-Analysis_0.pdf

Mayne, J. (2001). Addressing Attribution Through Contribution Analysis: Using Performance Measures Sensibly. Canadian Journal of Program Evaluation, 16(1), 1-24.

Punton, M. & Barnett, C. (2018). Contribution analysis and Bayesian Confidence Updating. Itad briefing paper.

Wimbush, E., Montague, S., & Mulherin, T. (2012). Application of contribution analysis to outcome planning and impact evaluation. Evaluation, 18(3), 310-329.
