One of the tasks involved in understanding causes is to check whether the observed results are consistent with a cause-effect relationship between the intervention and the observed impacts.
Some of the options for this task involve an analysis of existing data and some involve additional data collection. It is often appropriate to use several options in a single evaluation. Most impact evaluations should include some options that address this task.
Gathering additional data
- Key informant attribution: providing evidence that plausibly links participation with observed changes.
- Modus operandi: drawing on the previous experience of participants and stakeholders to determine what constellation or pattern of effects is typical for an initiative.
- Process tracing: focusing on the use of clues within a case (causal-process observations, CPOs) to adjudicate between alternative possible explanations.
Analysing existing data
- Check dose-response patterns: examining the link between dose and response as part of determining whether the program caused the outcome.
- Check intermediate outcomes: checking whether all cases that achieved the final impacts achieved the intermediate outcomes.
- Check results match a statistical model: comparing results with a statistical model to determine if the program caused the outcome.
- Check results match expert predictions: making predictions based on program theory or an emerging theory of wider contributors to outcomes and then following up these predictions over time.
- Check timing of outcomes: checking the predicted timing of events against the dates of actual changes and outcomes.
- Comparative case studies: using a comparative case study to check variation in program implementation.
- Qualitative comparative analysis: comparing the configurations of different cases to identify the components that produce specific outcomes.
- Realist analysis of testable hypotheses: using a realist program theory (what works for whom, in what circumstances, through what causal mechanisms?) to identify specific contexts where results would and would not be expected, and checking these.
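As a concrete illustration of the dose-response option above, a minimal sketch in Python is shown below. All data here are made-up illustrative values, and the variable names (`dose`, `response`) are hypothetical, not drawn from any real evaluation; the idea is simply that if the program caused the outcome, cases receiving a higher dose should tend to show a larger response.

```python
# Minimal dose-response check: correlate the "dose" each case received
# (e.g. hours of participation) with the "response" observed
# (e.g. change in an outcome measure). Illustrative data only.

from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-case dose and observed outcome change
dose = [0, 5, 10, 20, 40]
response = [1.0, 2.5, 4.0, 7.0, 12.0]

r = pearson_r(dose, response)
print(f"dose-response correlation: r = {r:.2f}")
```

A strong positive correlation is consistent with (but does not by itself prove) a causal link; a flat or negative pattern would count as evidence against the causal claim, which is why this check is best combined with the other options listed here.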
The following approaches combine several of the options above with strategies for ruling out possible alternative explanations.
- Contribution Analysis: assessing whether the program is based on a plausible theory of change, whether it was implemented as intended, whether the anticipated chain of results occurred and the extent to which other factors influenced the program’s achievements.
- Collaborative Outcomes Reporting: mapping existing data against the theory of change, then using a combination of expert review and community consultation to check the credibility of the evidence.
- Multiple Lines and Levels of Evidence (MLLE): reviewing a wide range of evidence from different sources to identify consistency with the theory of change and to explain any exceptions.
- Rapid Outcomes Assessment: assessing and mapping the contribution of a project's actions to a particular change in policy or the policy environment.