Week 18: is there a "right" approach to establishing causation in advocacy evaluation?

By
Josephine Tsui

We’ve talked before on this blog about evaluating advocacy interventions. One of the hottest debates is how and to what extent it is possible to establish causation in advocacy programmes.

Here, Josephine Tsui, a Research Officer at ODI and co-author of a new report ‘Monitoring and evaluation of policy influence and advocacy’, explains the issues and how we can approach this thorny issue.

In 2012, a large number of Independent Progress Reviews (IPRs) were commissioned by DFID's Programme Partnership Agreement (PPA) programmes. The firm I was working for at the time had won several of these IPRs, and the same story was coming from each programme. There was a tension between DFID needing concrete evidence of impact to justify renewing the unconditional funding privileges of PPAs, and NGOs resenting the burden of having to prove their advocacy impact. As for many evaluators, this put me in the uncomfortable position of voicing the NGOs' concerns about the M&E system to DFID while also questioning the NGOs' evidence of impact.

For me, the key question that came out of the process was: how can we get the balance right, ensuring organisations are accountable for their advocacy success while taking into account the complexity of the real world? The crux of the problem was how to measure causation rigorously in widely varying and frequently complex contexts, with limited resources. A new report from the Research and Policy in Development team at ODI, commissioned by the Bill and Melinda Gates Foundation, 'Monitoring and evaluation of policy influence and advocacy', explores this debate and presents a range of approaches and tools for practitioners and donors wishing to measure the impact of their advocacy interventions. The report was commissioned to help Bill and Melinda Gates Foundation staff approach the evaluation of their policy-advocacy work. It details the evolution of M&E in advocacy and policy influence, discusses the theories and approaches of how interventions are designed to work, presents some examples of methods and tools used in monitoring and evaluating advocacy, and offers some case studies of where and how they have been used.

There are many interpretations of causation. A lot of people talk about contribution and attribution to describe causation, but these terms are not used consistently. For example, some people use the term attribution to imply the change is 100% caused by the intervention, while others use it as a precise measure of the degree to which the intervention has contributed to the change. Others feel that this approach could place too much importance on the intervention at the cost of understanding other contributory factors, and prefer to use contribution as the main model for understanding causation. In the advocacy field, the majority of evaluators and practitioners advise against measuring attribution in favour of understanding contribution. At the same time, there are people who advise that causation can only be measured using a valid counterfactual. Are these ideas conflicting?

I was reflecting on my experiences in evaluations while working on the paper with my colleagues Simon Hearn and John Young. The general consensus in the evaluation community is that the difficulties of operating in a complex environment should not deter us from investigating causality in evaluations. But there are approaches to measuring causality that do not require a counterfactual, and the paper details five such methods, which focus on drawing links between our interventions and their impact. They can be divided into two groups.

The first group includes methods that draw links between interventions and impact. By looking at the different types of interaction between cause and effect, relationships can be drawn to determine probable causality (Befani, 2012). For example, neither leaking gas nor a lit cigarette will produce fire unless they are combined. In the paper we detail two methods that draw links between situations, events and outcomes in a similar manner: Qualitative Comparative Analysis and the RAPID Outcome Assessment method. One case study in the paper uses a derivative of the outcome mapping method to draw patterns between interventions and responses from boundary partners.
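To make the idea of causal combinations concrete, here is a minimal, hypothetical sketch of crisp-set QCA-style reasoning in Python, using the gas-and-cigarette example above; the case data, condition names and helper function are illustrative and are not taken from the report.

# Hypothetical crisp-set QCA-style truth table: each case records whether a
# condition was present (1) or absent (0) and whether the outcome occurred.
# The data are purely illustrative.
cases = [
    {"gas_leak": 1, "lit_cigarette": 1, "fire": 1},
    {"gas_leak": 1, "lit_cigarette": 0, "fire": 0},
    {"gas_leak": 0, "lit_cigarette": 1, "fire": 0},
    {"gas_leak": 0, "lit_cigarette": 0, "fire": 0},
]

conditions = ["gas_leak", "lit_cigarette"]

def sufficient_combinations(cases, conditions, outcome="fire"):
    """Return combinations of conditions that are consistently followed by
    the outcome across every observed case showing that combination."""
    combos = {}
    for case in cases:
        key = tuple(case[c] for c in conditions)
        combos.setdefault(key, []).append(case[outcome])
    return [key for key, outcomes in combos.items() if all(outcomes)]

for combo in sufficient_combinations(cases, conditions):
    present = [name for name, value in zip(conditions, combo) if value]
    print("Sufficient combination:", " AND ".join(present))
# Prints: Sufficient combination: gas_leak AND lit_cigarette

The same truth-table logic extends to more conditions and cases; full QCA also assesses the necessity of conditions and handles contradictory cases, which this sketch leaves out.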

The second group includes tools that use theories or methodologies linking causes and contextual factors. For example, one case study in the working paper uses general elimination analysis to determine the impact of a campaign. Through the process of eliminating all alternative explanations, the evaluators were able to conclude that the campaign had a large impact on US Supreme Court policy. The working paper also describes process tracing and contribution analysis.
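As a rough illustration of the elimination logic (not of the actual case study), the following hypothetical Python sketch lists candidate explanations for an observed policy change and rules out those contradicted by the evidence gathered; the explanations and evidence flags are invented for illustration.

# Hypothetical general-elimination-style reasoning: start from a list of
# candidate explanations for an observed policy change and eliminate any
# explanation that the gathered evidence contradicts. Entries are invented.
candidates = {
    "advocacy campaign shifted the policy debate": True,    # evidence consistent
    "change driven solely by prior court precedent": False,  # evidence rules out
    "change driven by an unrelated legislative shift": False,
}

surviving = [
    explanation
    for explanation, consistent_with_evidence in candidates.items()
    if consistent_with_evidence
]

if len(surviving) == 1:
    print("Most plausible explanation:", surviving[0])
else:
    print("Further evidence needed; remaining candidates:", surviving)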

There is no consensus about the 'right' approach to advocacy evaluation. Most evaluation experts stress that the choice of approach, methods and tools should be determined by the evaluation questions and the context, and that there is no hierarchy of methods. The methods and tools we discuss in the paper are based on strong social science principles, and thus provide the rigour necessary to determine the causation of impact, while remaining accessible and practical; they have also been widely tested in complex environments.

The field of advocacy evaluation is growing fast, and there are literally hundreds of methods and approaches that evaluators, commissioners and managers of evaluations can draw upon. With the appropriate models and frameworks, it is possible to strike the right balance: meeting donors' need for evidence of impact without placing an undue burden on NGOs.
