Most evaluations need to investigate what is causing the outcomes and impacts of an intervention. (Some process evaluations assume that certain activities contribute to intended outcomes without investigating this link.)
Sometimes it is useful to think about this in terms of ‘causal attribution’ – did the intervention cause the outcomes and impacts that have been observed? In many cases, however, the outcomes and impacts have been caused by a combination of programs, or by a program in combination with other factors.
In such cases it can be more useful to think about ‘causal contribution’ – did the intervention contribute to the outcomes and impacts that have been observed?
One strategy for causal inference is to check that the data are consistent with what we would expect if the intervention were effective. This involves examining not only whether results occurred, but also their timing and specificity.
Another strategy for assessing the impact of an intervention is to compare it to an estimate of what would have happened without the intervention. Options include the use of control groups, comparison groups and expert predictions.
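The comparison-group option can be sketched in a few lines of code: the comparison group's average outcome stands in for the estimate of what would have happened without the intervention, and the difference between group means is the estimated impact. All figures and group sizes below are invented purely for illustration; a real evaluation would also need to establish that the groups are genuinely comparable.

```python
def mean(values):
    """Arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

# Hypothetical outcome scores for the group that received the intervention
treatment = [72, 68, 75, 70, 74, 69]

# Hypothetical outcome scores for a similar group that did not
# receive the intervention; its mean estimates the counterfactual
# (what would have happened without the intervention)
comparison = [65, 63, 66, 64, 67, 62]

# Estimated impact = observed mean outcome minus counterfactual estimate
estimated_impact = mean(treatment) - mean(comparison)
print(f"Estimated impact: {estimated_impact:.1f} points")
```

This simple difference in means ignores pre-existing differences between the groups; designs such as randomised control groups or matched comparison groups exist precisely to make this comparison credible.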
A third strategy is to identify other factors that might have caused the impacts and see if it is possible to rule them out.
Recorded webinar: Jane Davidson gives a 20-minute overview of options for causal inference in the American Evaluation Association's Coffee Break series. Free to all, including non-members.