From rigorous methods to rigorous processes – directions for travel after the RCT debate
It is neither relevant nor useful either to dismiss randomised controlled trials (RCTs) out of hand or to treat them as the only option for rigorous impact evaluation (IE).
We need to explore other approaches and methods that can contribute to causal inference and systematically link observed effects to their causes, and we need to extend what we mean by rigorous IE. Rigour is not just a matter of methodology – or indeed of any single method – but should cover each and every stage of an evaluation.
Next month at Wageningen University, the impact evaluation community will gather for a conference – Impact Evaluation 2013: Taking Stock and Looking Ahead, 25 and 26 March – to ask, among other questions, ‘what influences evaluation design, communication and utilisation?’ It is a good moment, then, to take stock of the current debates in this rapidly evolving field and to think about where we go next, now that the debate is finally moving beyond a narrow, single-method argument about the pros and cons of RCTs.
The explosion of enthusiasm for RCTs in development was accompanied by vocal commentators pointing out their failings (see, for example, Lant Pritchett’s arguments or this Find What Works blog post on how politics and context confound measurement). Thankfully, there is now a growing consensus that the problem is not RCTs as such (see Kirsty Newman’s blog defusing the RCT bogeyman) but the practice of concentrating on a single method, treating it as a ‘golden rule’ and assuming that an explicit counterfactual approach to establishing causality is the only way to produce evidence that is rigorous and useful.
A very good point, but it is not enough to leave the single-method debate behind – we also need to promote methodological pluralism by investigating possible IE approaches and methods, and how to combine them (not just in terms of mixing qualitative and quantitative methods), in ‘live’ IEs in different settings. And we need to broaden what we mean by rigorous IE.
Let's take a step back
Although it may sometimes seem that ‘randomistas’ have hijacked the term ‘rigour’, rigorous IE was never meant to be only about the ‘right’ methodology. One of the original definitions of rigorous IE, from the Network of Networks for Impact Evaluation (NONIE), argues in its guidance paper that “rigorous impact evaluation is more than methodological design” and that “no single method is best for addressing the variety of questions and aspects that might be part of impact evaluations”.
Even the International Initiative for Impact Evaluation (3ie), whose IE database is over 50% RCTs, states in its founding document that rigorous IE studies “are analyses that measure the net change in outcomes for a particular group of people that can be attributed to a specific program using the best methodology available, feasible and appropriate to the evaluation question that is being investigated and to the specific context”. An RCT – or any other single method – is by no means always the best method available, feasible or appropriate.
Then two steps forward
First: we should broaden the methodological scope of IE and investigate other approaches and methods that can systematically and rigorously link causes and effects. Recent reports by White and Phillips (2012) and Stern et al. (2012) do an excellent job of identifying potential approaches such as realist evaluation, contribution analysis and process tracing, but their attention to methods tends to be somewhat weaker. Mixing methods is usually assumed to help overcome the inherent limitations of any single method, but what to combine, how to do so (in a way that adds value) and in which contexts are difficult questions without clear answers. They need field-testing in actual development projects.
While learning how ‘traditional’ methods can work together in different contexts, we can also look to other disciplines for innovation. The use of Social Network Analysis (which originates in sociology) has been limited so far, but it has the potential to map the structure of relationships between actors and to track changes in the frequency of information flows, as the sketch below illustrates. Experimental games, used predominantly by behavioural economists, may also have potential in development, helping to explain why participants often do not behave in the expected manner and why some tools or services are not used as intended.
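To make this concrete, here is a minimal sketch in Python using the networkx library of the kind of question Social Network Analysis can answer. The actors, ties and weights are entirely hypothetical, invented purely for illustration; the point is simply that comparing baseline and endline network snapshots can show how central each actor is and how information flows have changed over the life of a project.

```python
# A minimal, hypothetical sketch of Social Network Analysis for an IE:
# who sits at the centre of information flows, and how did flows change?
import networkx as nx

# Hypothetical information-sharing ties at baseline and endline,
# weighted by how often actors exchanged information per month.
baseline = nx.DiGraph()
baseline.add_weighted_edges_from([
    ("NGO", "farmer_group_A", 2),
    ("NGO", "farmer_group_B", 1),
    ("extension_officer", "farmer_group_A", 4),
])

endline = nx.DiGraph()
endline.add_weighted_edges_from([
    ("NGO", "farmer_group_A", 3),
    ("NGO", "farmer_group_B", 4),
    ("extension_officer", "farmer_group_A", 5),
    ("farmer_group_A", "farmer_group_B", 2),  # a new peer-to-peer tie
])

# Degree centrality summarises how connected each actor is;
# comparing the two snapshots shows how the structure changed.
for label, graph in [("baseline", baseline), ("endline", endline)]:
    print(label, nx.degree_centrality(graph))

# Change in the flow of information reaching each actor
# (weighted in-degree, i.e. total incoming exchanges per month).
for node in endline.nodes:
    before = baseline.in_degree(node, weight="weight") if node in baseline else 0
    after = endline.in_degree(node, weight="weight")
    print(f"{node}: information inflow {before} -> {after}")
```

In a real evaluation the ties would come from survey or interview data rather than being typed in by hand, but the same comparison of network snapshots underpins claims about changing relationships and flows.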
Second: it is crucial to extend the discussion beyond methods to processes. While the recent debate has concentrated on RCTs’ potential to construct a valid counterfactual, we tend to forget other essential elements such as feasibility, relevance and utility, as well as institutional learning. Unless every stage of an evaluation is rigorous, even a very robust IE design can suffer from flawed implementation or produce evidence that agencies cannot use, whatever the method.
In the end, IEs are not just about accountability to donors; they are also about learning (one of the objectives of the AusAID-funded Methods Lab at ODI and a focus of the Wageningen conference). Aid-implementing institutions would benefit greatly from learning not just from the evidence produced (often by external evaluators) but also from the process itself: how to diagnose evaluation problems, how to decide whether an IE is the right evaluation solution, how to manage the evaluation process, how to assess the quality of results and how to disseminate them. It is time to broaden the discussion from one rigorous IE method to multiple (rigorous) methods – and further still, to a rigorous IE process.
Guest blog written by Tiina Pasanen, Research Officer, Research and Policy in Development programme, ODI.