What is it?
Checking the consistency of results means analysing data in systematic ways to check the extent to which it matches what would be expected if the intervention has worked, in order to understand whether a causal relationship exists between variables. This may involve specific and additional data collection (e.g. key informant attribution) or analysis of existing or descriptive data (e.g. checking dose/exposure patterns, checking the timing of outcomes, comparative case studies). Having a strong logic model or program theory is a foundation for most options. It is advisable to use this strategy in combination with Investigate possible alternative explanations (strategy 3), and in this way seek to understand the intervention's contribution in the context of other contributing factors.
See the full list of options for checking the consistency of results on the Better Evaluation website. This page is recommended background reading before considering options that may be applied to C4D.
C4D and checking the consistency of results
Applying the C4D principles
- In general, the options outlined under this strategy are good options for answering causal questions about C4D, since a combination of options can be used in complicated and complex C4D initiatives. It is best to use this strategy in combination with strategies to rule out possible alternative explanations. In checking the consistency of results, it is important to be attuned to feedback loops (where one or more factors reinforce changes in each other), tipping points (where a perhaps minor factor builds on cumulative factors over time to create significant change) and other non-linear, complex interactions.
- These options are more sensitive to context and interconnections than counterfactual options.
- Several options can be adapted to be more inclusive and engaging and to contribute to mutual learning. One option that is explicitly participatory is Collaborative Outcomes Reporting, which maps data against the theory of change and then uses a combination of expert review and community consultation to check the credibility of the evidence.
- These options are useful for developing better understandings of causes and changes. (In comparison, counterfactual designs are better suited to situations where strong hypotheses (theories) are known and need to be tested and proven.)
- There are many practical and feasible options for checking that the evidence supports conclusions about attribution or contribution by the C4D intervention to the observed changes. Even modest R,M&E frameworks and studies could include these options to greatly improve the ability to make clear, evidence-based causal inferences.
Recommended options and adaptations for C4D
(A combination of strategies is usually advisable)
Checking dose-response patterns: this involves examining the link between dose and response to see whether the evidence is consistent with the program having caused the outcome. In C4D this could look at whether the amount of engagement with the communication activities (exposure to videos, frequency of participation in events etc.) corresponds with the level of change in variables (such as increases in knowledge, empowerment etc.). This could also involve checking whether there has been an increase in the particular issues covered in the communication activities and not in other similar issues (for example, increases in reporting of specific types of violence covered compared to issues not covered). It is useful to think about the following principles in the C4D Evaluation Framework:
- complexity: relying on dose patterns alone can assume linear (simple cause-effect) relationships between exposure and changes. While this approach may provide some interesting insights, it is good to combine it with other options, and to explore the possibilities of feedback loops, tipping points and other complex interactions of factors.
Checking the timing of outcomes: checking that the timing of actual changes makes sense in terms of the timing of interventions. In C4D this could mean checking whether the timing of changes in attendance at health clinics or community-led actions is consistent with the timing of engagement in communication activities. It is useful to think about the following principles in the C4D Evaluation Framework:
- complexity: relying on the timing of outcomes alone can assume linear relationships between exposure and changes. Social and behaviour changes are often long-term, incremental changes, reliant on a conducive context, rather than immediate and obvious change. This method can provide interesting insights, but should usually be combined with other lines of investigation.
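One simple timing check is to compare an outcome series before and after the communication activities began. The sketch below uses hypothetical monthly clinic attendance figures and an invented campaign start month, purely to illustrate the idea.

```python
# Minimal timing-of-outcomes sketch with hypothetical data.

# Hypothetical clinic visits per month; the campaign launches at index 5.
monthly_attendance = [40, 42, 41, 43, 44, 58, 63, 67, 70]
intervention_month = 5  # invented for this example

before = monthly_attendance[:intervention_month]
after = monthly_attendance[intervention_month:]
avg_before = sum(before) / len(before)
avg_after = sum(after) / len(after)

print(f"average attendance before: {avg_before:.1f}, after: {avg_after:.1f}")

# A rise that begins only after the launch is consistent with a contribution,
# but incremental social and behaviour change may lag the activities by months,
# so an absent or delayed rise is not by itself evidence of no effect.
```

In practice the comparison would also need to account for seasonality and other concurrent events, which is one reason this check is best paired with investigating alternative explanations.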
Key informant attribution: key informants are asked about the causes of change and whether these are linked to program activities, through qualitative causal narratives.
- holistic: there is a risk with this method that participants will give the answers they think you want. To avoid this bias, start with open-ended qualitative exploration of what participants say led to the changes, rather than testing whether the communication activities caused the changes.
Contribution analysis is a process that combines checking the consistency of the results that support causal attribution with strategies to investigate possible alternative explanations (strategy 3). A resource detailing how to undertake contribution analysis is available on the Better Evaluation website.
The Tanzania CO undertook causal analysis of the Shuga Radio program's contribution to HIV/AIDS outcomes by checking the consistency of evidence (in combination with ruling out possible alternative explanations). This example is consistent with the C4D Evaluation Framework in the following ways:
- complexity: multiple lines of enquiry were used to come to some conclusions about causes. Multiple possible causes were identified, and each may have some contribution.
The Vietnam CO, with their government counterparts, developed an M&E plan that included causal analysis strategies based on checking the consistency of evidence. See NationalProgramforChildProtectionCommunicationMEPlan.docx