This guest blog by Tiina Pasanen and Kaia Ambrose discusses how the Pathways to Resilience in Semi-arid Economies (PRISE) project approached the challenge of designing an outcome monitoring system that considered the dynamics and complexities of a multi-project, multi-country and multi-partner research consortium, and shares some of the key lessons that came out of it. Feature image credit: Lancelot Ehode Soumelong.
Policy research programmes face multiple demands and expectations. They are increasingly implemented through large-scale inter- and transdisciplinary structures that involve multiple components, sectors and sites. They also often address complex development problems such as climate change, which is characterised by uncertainty, varying levels of risk and a lack of simple responses. Against this backdrop, research programmes are increasingly striving to go beyond producing and counting outputs and to have a wider impact beyond academia, including influencing policy decision-making and development practice.
Given this complexity, how can research programmes systematically but realistically define and assess outcome-level changes, learn about these changes across projects and partners, and use the data they collect to adapt and improve programme strategies and activities to enhance their influence?
PRISE and its tailor-made Outcome Mapping system
This is what we in the Pathways to Resilience in Semi-arid Economies (PRISE) project set out to do when it began in 2014. PRISE was a five-year, multi-country, multi-project and multi-partner research consortium that generated new knowledge about how economic development in semi-arid regions could be made more equitable and resilient to climate change. Our aim was to create a system to continuously capture, analyse and understand changes in stakeholder behaviour and actions around the research activities and results, and how these changes can ultimately lead to sustained shifts in policy and practice. Building on existing work done by ODI’s RAPID programme and using the Outcome Mapping (OM) approach, we designed an outcome monitoring system that fitted the specific needs of PRISE and considered the dynamics and complexities involved in a multi-project, multi-country and multi-partner programme.
Five years later, PRISE has come to an end, but the lessons learnt from its outcome monitoring system are still fresh in our minds, and we wanted to share our experiences with other research programmes grappling with how to learn about and capture outcome-level change in dynamic contexts. In our recently published reflective analysis and guide, we provide practical recommendations on how and why to design an OM-based outcome monitoring system, what it requires and what some of the common challenges may be. Our overall key lessons learnt include:
First, it is crucial to understand what research team capacities are for influencing policy
Not everyone involved in a research programme may think they have (or want to have!) a role or responsibility to influence policy actors, and certainly not everyone has the skill set or communication tools to do so. However, each member of a research team should agree on a shared vision of policy influence and determine their role in engaging the stakeholders involved in policy-related decision-making. For PRISE, this meant that researchers took on the responsibility to observe changes in the behaviour of stakeholders they were regularly in contact with and to record these changes in the project’s outcome monitoring system. In our experience, this required the research programme to build researcher capacity, provide coaching and training on how to use the OM system, and ensure researchers actively participated in reflection sessions on the data captured in it.
Secondly, explicit and regular opportunities for reflection are necessary for adaptation
A crucial part of the outcome monitoring system was the biannual ‘sense-making’, or reflection, sessions at which PRISE Monitoring & Evaluation (M&E) focal points and researchers examined the data on stakeholder behaviour that had been collected. This not only gave research teams the opportunity to reflect on how stakeholders were engaging with the research evidence PRISE had generated, but also allowed them to make any necessary adjustments to their stakeholder engagement strategies. As part of this iterative process, it was important that learning and reflection happened on a regular basis and in a structured manner, so that it became routine practice rather than a one-off event.
Thirdly, participation in monitoring and learning should happen at each stage of the project
It is important to dedicate M&E staff to the development and implementation of the outcome monitoring system, but getting the programme’s stakeholders to engage with the monitoring processes is essential for sustained country and project ownership. The joint reflection sessions on data captured in the OM system should not be the only instance where participation happens. For example, defining the outcomes that the research programme is looking to achieve should also be a participatory process involving a wider set of programme staff, to build a shared, long-term vision across all partners and projects. Just as we need to constantly rethink who our stakeholders are, we also need to reflect on who should be part of the process of analysing and making sense of the data collected in the outcome monitoring system.
And finally – the monitoring system doesn’t need to be complicated
While influencing policy is undoubtedly complex, monitoring it and learning from it in order to adapt and adjust engagement strategies can, and should, be realistic and doable. Though outcome monitoring in a large consortium with multiple partners and stakeholder groups requires resources, time and coordination, the system itself can be relatively simple. By focusing on key stakeholder groups and using free and simple online tools, the process can be kept manageable. In PRISE, we collected observations via a Google Form located on a password-protected knowledge management platform designed for CARIAA (the umbrella consortium that PRISE belonged to). This ensured that all observations were stored and automatically categorised in a joint online Google spreadsheet. The M&E advisor and manager, as well as the M&E focal points, had access and editing rights to the spreadsheet. They could, for example, re-categorise observations or add information where descriptions lacked detail or were unclear.
The outcomes we expect, like and love to see (terms that are used in the Outcome Mapping approach to categorise changes in stakeholder behaviour) don’t necessarily have to be complicated either. They can be as straightforward as a key stakeholder requesting data from a project or a policymaker inviting researchers to speak at an event. In the reflection sessions, asking simple questions to guide analysis and discussion can often be sufficient, for example: What worked in the past six months in terms of stakeholder engagement? What didn’t work? What do we need to change in our stakeholder engagement strategy?
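To make this concrete, here is a minimal sketch (our own illustration, not PRISE’s actual tooling; the stakeholder names and field names are invented) of how observations tagged with the three OM progress markers might be tallied ahead of a reflection session:

```python
from dataclasses import dataclass
from collections import defaultdict

# The three graduated progress markers used in Outcome Mapping to
# categorise changes in stakeholder behaviour.
MARKERS = ("expect", "like", "love")

@dataclass
class Observation:
    stakeholder: str   # e.g. a ministry or partner organisation (invented)
    description: str   # what was observed
    marker: str        # one of MARKERS

def summarise(observations):
    """Count observations per stakeholder and progress marker --
    the kind of simple tally a reflection session might start from."""
    counts = defaultdict(lambda: dict.fromkeys(MARKERS, 0))
    for obs in observations:
        if obs.marker not in MARKERS:
            raise ValueError(f"unknown marker: {obs.marker}")
        counts[obs.stakeholder][obs.marker] += 1
    return {k: dict(v) for k, v in counts.items()}

observations = [
    Observation("Ministry of Planning", "requested data from the project", "expect"),
    Observation("Ministry of Planning", "invited researchers to speak at an event", "like"),
]
print(summarise(observations))
# → {'Ministry of Planning': {'expect': 1, 'like': 1, 'love': 0}}
```

In practice a shared spreadsheet does the same job; the point is only that the categorisation itself can stay this simple.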
Though this outcome monitoring system was tailored to the needs of the PRISE programme, our experiences can provide valuable insights into systematic processes and participatory approaches for other multi-project, multi-country and multi-partner research endeavours. We have found that other research programmes, such as the Sanitation and Hygiene Applied Research for Equity (SHARE) consortium, have had similar experiences, further highlighting the wider application potential of the outcome mapping-based approach.
We'd love to hear of other examples of doing this: leave us a comment below to let us know about your experiences, or your general thoughts and ideas!
This blog post is based on the authors’ co-written paper. The paper identifies a number of challenges, responses and lessons learnt that can be useful for other programmes planning to set up a similar system to measure and understand outcome level changes, particularly with regards to research uptake and policy influence. Read more.