Tiina Pasanen is a Research Officer for the Research and Policy in Development (RAPID) Programme at the Overseas Development Institute (ODI). In this blog, Tiina shares her top three realist ‘take-aways’ from the 1st International Conference on Realist Approaches to Evaluation and reflects on when and how realist evaluation may be most useful.
Raise your hand if you are a) a positivist b) a realist c) a constructivist, or d) somewhere between these categories. This was what we were asked to do during one of the keynote sessions at the 1st International Conference on Realist Approaches to Evaluation and Synthesis: Successes, Challenges, and the Road Ahead - an epistemological first for me at an evaluation conference!
Realist evaluation (RE) is a relatively new approach, developed by Pawson and Tilley in 1997. It belongs to a family of theory-driven evaluation approaches that seek to clarify the root causes of programme outcomes by evaluating evidence from qualitative, quantitative and mixed-methods research.
Hosted by the University of Liverpool, the conference brought together 150 researchers and evaluation practitioners to learn and share experiences of using RE and synthesis to assess complex evidence on programmes, interventions and policy implementations in the health and social sciences.
Here are my top three ‘realist’ take-aways, as a beginner to the field:
1. Understand the limits of the ‘what works?’ question
It is important to remember that there are always winners and losers in social interventions. A development intervention (especially a complex one) will never work in the same way for all people, and there are limits to using statistical averages without understanding the diversity within the data. With this in mind, RE’s tagline ‘what works, for whom, in what circumstances and why?’ is clearly a better starting point than just ‘what works?’ – the question frequently asked in ‘traditional’ impact evaluations.
2. Pay attention to context and mechanisms
There are multiple ways to understand mechanisms in practice. To use a common metaphor: gravity is the mechanism that causes a ball to drop to the ground, but gravity acts differently in different contexts (e.g. the ball drops more slowly on the moon than on earth). The mechanism is only observable through its effect (i.e. the ball drops). Mechanisms in social development programmes tend to be cognitive or emotional processes, such as trust. RE looks for context-mechanism-outcome hypotheses (configurations), which is one way to capture the complexity of an intervention. First, you have to understand the contextual factors – such as the environment, public policy, or socio-economic conditions – then identify the mechanisms at play and how they are triggered by those contextual factors.
3. Appreciate multiple sources of evidence
RE claims to be ‘method-neutral’, i.e. it does not impose the use of particular methods. While I question whether this is ever 100% possible (we evaluation practitioners tend to have preferred methods), I very much like the idea of mixing methods, synthesising evidence from multiple sources and trying to make sense of it. Usually, mixing methods, at least in traditional impact evaluations, aims to complement statistical analysis. But under the RE approach, we are not seeking one ‘representative’ answer; rather, we accept that different results can reveal something about the diverse patterns of impact.
What was missing from the conference?
The conference facilitated thought-provoking and timely debate, but there were a few missing dimensions. I was aware of a constant drive to define RE – and especially to distinguish it from randomised controlled trials, which are seen to represent the positivist approach and were heavily criticised at the conference. Unfortunately, this can keep divisive paradigm debates alive (something that has been going on in the international development field for far too long), instead of creating new possibilities for collaboration. That said, by the end of the fourth day I did start to hear an increasing number of more balanced and collaborative comments emerging.
In my work, I’m surrounded by researchers and evaluators who focus on the production, ‘brokering’ and use of knowledge and evidence for policy. I was therefore surprised by the lack of this focus at the conference overall. Perhaps this is to be expected, given that RE is a theory-driven approach and the academic environment incentivises the production of peer-reviewed articles over more policy-focused outputs and dialogue with key stakeholders. I was therefore pleased to attend the final conference session, which focused on working with policy-makers.
Reflections on when or how RE can be most useful:
With my new RE lens, I began to wonder whether researchers collaborating with policy-makers should start with an analysis of the context and mechanisms of policy-making. This could be used to identify what types of evidence and means of communication policy-makers need in order to make evidence-informed decisions in highly fluctuating and political environments. I leave you with my suggestions of where RE could be particularly suitable and useful:
- When it is more important to understand the mechanisms (reasons) behind the outcomes than knowing the prevalence of the effect.
- When an intervention has shown diverse patterns of impact and we want to understand why an intervention seems to work for some and not for others.
- When we want to apply an intervention in another context but are not sure why it works in the first place. Understanding the context and mechanisms behind the outcomes can strengthen the implementation of the programme in other contexts.
More information on Realist Evaluation: