This briefing, commissioned by BBC Media Action, investigates the potential of experimental and quasi-experimental designs to create a counterfactual, which is one option for investigating causal attribution and contribution. It finds that such methods can sometimes be used to evaluate the impact of media and communication programmes, but that the need for flexibility and the complexity of media development often make these designs difficult to apply effectively.
The Evaluating C4D Resource Hub sits within BetterEvaluation and houses a growing collection of the available guides, toolkits, tools and methods to use for research monitoring and evaluation (R,M&E) of Communication for Development (C4D) initiatives. The Hub is structured around two combined frameworks:
C4D Evaluation Framework (represented by the circle) is an approach. It describes the values and principles that guide our decisions in C4D.
The BetterEvaluation Rainbow Framework (represented by the rainbows) is a structure. It organises the practical tasks into seven categories or 'clusters' and provides options.
While the resource recommendation below discusses the resource specifically in relation to its usefulness for evaluating C4D within the Evaluating C4D Resource Hub's C4D Framework, this resource may also be of use for people working in other contexts and with different frameworks.
Authors and their affiliation
Devra Moehler
Year of publication
Type of resource
The review collates a series of examples where experimental designs to create a counterfactual have been used in media and communication initiatives. In many cases, the tested variable is change in behaviour. One example (page 13) tested changes in attitudes relating to prejudice and conflict from listening to radio dramas, comparing treatment and control radio listenership groups; the control groups heard a radio drama on an unrelated topic. Importantly, the review also finds that designing such approaches in this context is challenging and may not always be feasible.
Who is this resource useful for?
- Communication for Development Practitioners
- Program Officers
How have you used or intend on using this resource?
This resource has been identified as particularly useful for evaluating communication for development (C4D). It was identified as part of a research project in collaboration with UNICEF C4D.
Why would you recommend it to other people?
This research briefing would be valuable for teams considering options for investigating causal attribution and contribution, including experimental designs (randomised control trials), to assess the impacts of a program. It provides cases on which future research designs could be modelled, while also offering salient advice on when this type of approach may not be suitable. This example is consistent with the C4D Evaluation Framework in the following ways:
- accountable: program teams are often asked to consider experimental designs, since this kind of evaluation can provide credible evidence about whether a program works. However, being accountable also means understanding when this approach is feasible and will deliver credible results.
The C4D Evaluation Framework would suggest the need to reflect the following issues when considering using an experimental design:
- complexity: as with all experimental and quasi-experimental designs, the creation of a counterfactual requires standardised implementation, and therefore does not allow the flexibility needed for adaptive and emergent approaches to C4D.
- participatory: experimental and quasi-experimental designs are generally not associated with participatory approaches, due to the need for standardisation and specific technical expertise.
Moehler, D. C. Democracy, governance and randomised media assistance. Available at: http://downloads.bbc.co.uk/rmhttp/mediaaction/pdf/democracy-governance-research-report.pdf (Accessed: 14 February 2017).