Most evaluations require ways of addressing questions about cause and effect – not only documenting what has changed but understanding why.
Impact evaluation, which focuses on understanding the long-term results of interventions (projects, programs, policies, networks and organisations), always includes attention to understanding causes.
Understanding causes can also be important in other types of evaluations. For example, in a process evaluation there often needs to be some explanation of why implementation is going well or badly in order to suggest ways it might be improved or sustained.
In recent years there has been considerable development of methods for understanding causes in evaluations, and also considerable discussion and disagreement about which options are suitable in which situations.
When choosing between these different options, consider the different types of causal inference that might be involved:
One cause producing one effect – it is necessary and sufficient to produce the effect
Two or more causes combining to produce an effect (for example, two programs or a program when combined with other factors such as particular participant characteristics) – one of the causes alone is necessary but not sufficient
Two or more causes being alternative ways of producing an effect – each of them is sufficient and neither is necessary
Different labels might be used for these different types of causal relationship – ‘causal attribution’ implying a single cause, ‘causal contribution’ implying a package of causal factors, and ‘causal inference’ being used to refer to all of these.
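The three patterns above can be expressed as simple boolean conditions. The following is an illustrative sketch only – the factor names (`program`, `other_factor`, `alternative`) are invented placeholders for whatever causes an evaluation examines:

```python
# Illustrative sketch: the three types of causal relationship as boolean logic.
# Each function returns True when the effect would be expected to occur.

def single_cause(program: bool) -> bool:
    # Causal attribution: one cause that is necessary and sufficient.
    # The effect occurs if and only if the program is present.
    return program

def combined_causes(program: bool, other_factor: bool) -> bool:
    # Causal contribution: a package of causes. Each cause is necessary
    # but not sufficient; the effect occurs only when both are present.
    return program and other_factor

def alternative_causes(program: bool, alternative: bool) -> bool:
    # Alternative causal pathways: each cause is sufficient and neither
    # is necessary; the effect occurs if either is present.
    return program or alternative
```

In the second pattern, removing the program removes the effect even though the program alone could not have produced it; in the third, removing the program may change nothing because another sufficient cause is still operating – which is why attribution questions are harder to answer there.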
It is also important to consider the different types of questions that might be asked about cause and effect:
Did the intervention make a difference?
For whom, in what situations, and in what ways did the intervention make a difference?
How much of a difference did the intervention make?
To what extent can a specific impact be attributed to the intervention?
How did the intervention make a difference?
To explore the different ways of understanding causes in an evaluation, download the overview, which lists different methods, designs, processes and approaches. You can also explore the following three broad strategies for causal inference:
Checking that the evidence is consistent with causal contribution: This strategy should be part of all evaluations that include causal questions. A number of options and approaches can be used to check that the data are consistent with what would be expected if the intervention were contributing to the observed changes.
Comparing the results to a counterfactual: This strategy is appropriate in some but not all evaluations. A number of options and approaches can be used to develop a counterfactual – an estimate of what would have happened without the intervention – and to compare it with what actually happened with the intervention.
Investigating and ruling out alternative explanations: This strategy should be part of all evaluations that include causal questions. A number of options and approaches can be used to identify other factors that might have caused the observed impacts and to check whether they can be ruled out.
Recorded webinar: Jane Davidson's overview of options for causal inference, presented as a 20-minute webinar in the American Evaluation Association's Coffee Break series. Free to all, including non-members.
Models of causality and causal inference. Paper by Barbara Befani discussing different ways of thinking about causality and investigating cause and effect.
Making causal claims. Paper by John Mayne on the logic of reasoning about multiple causal factors combining to produce results.