52 weeks of BetterEvaluation: Week 33: Monitoring policy influence part 2 - like measuring smoke?

By Arnaldo Pellini

In the second part of our mini-series on monitoring and evaluating policy influence, Arnaldo Pellini, Research Fellow at the Overseas Development Institute (ODI), explores a project that supported research centres in Australia in monitoring their impact on health policy in Southeast Asia and the Pacific.

He discusses the main challenges and offers some recommendations for others looking at the M&E of policy influence.

Read part one of the mini-series Monitoring and evaluating policy influence and advocacy (Part 1).

When I think about measuring the policy influence of research, the main character of Wayne Wang’s 1995 movie Smoke often comes to mind. At some point in the movie he tells the story of how an Elizabethan made a bet with his friends that he would be able to measure the weight of smoke. He proved his point by taking a cigar, weighing it, smoking it, and then weighing the ashes and the butt: the difference was the weight of the smoke.
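Put as arithmetic: weight of smoke = weight of the unsmoked cigar − (weight of the ashes + weight of the butt).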

This story reminds me that, with creativity and a little imagination, it is possible to measure the influence of research on policy, something that can seem as impalpable as smoke.

In this blog I describe how ODI’s Research and Policy in Development (RAPID) programme supported four Australian university centres to measure their influence on health policy. Between 2008 and 2013, under the Health Knowledge Hub initiative (HKH), these centres conducted policy research funded by AusAID in the health sector in Southeast Asia and the Pacific.

The HKH initiative set up knowledge hubs at four universities to establish collaborations within academia and with policy-makers and practitioners, with the aim of bringing research evidence into policy decision processes and improving the effectiveness of health systems in the Asia–Pacific region.

We anticipated that some of the challenges would be:

  • Some of the hubs had been involved not only in research but also in the capacity development of government counterparts, which can make it difficult to assess the policy influence of the knowledge they generate.
  • Data on policy influence, for example on the uptake of the research, may be incomplete because collection started while the HKH initiative was already underway.
  • Whereas policy influence monitoring plans would normally be integrated into a project’s logframe from the beginning, the hubs already had a set logframe, and we had to find ways to fit new policy influence indicators and means of verification within it.

A framework for monitoring and evaluating policy influence

In her Making a Difference: M&E of Policy Research (a RAPID staple!), Ingie Hovland identified five performance areas: 1) strategy and direction, 2) management processes, 3) outputs, 4) uptake, and 5) outcome and impact.

Hovland’s framework helped the hubs to include in their logframe monitoring both indicators of the uptake of knowledge products (e.g. web statistics) and qualitative indicators of contributions to policy change, such as changes in attitudes, perceptions and legislation, assessed through case studies and stories of change.

Here are a couple of examples. We agreed that the original logframe included policy influence indicators that were too general, such as ‘changes in policy / practice / use in topic / theme area’. These were made more specific about the types of policy change sought, and expanded to include changes in the attitudes and perceptions of policy-makers, or of practitioners who can act as intermediaries to influence policy-makers. We also agreed to include qualitative case studies, developed through interviews with policy-makers, practitioners and capacity development participants, as means of verification for these types of change.

A second example: the original logframe indicator related to publications was to ‘identify particularly significant / high quality studies / papers in terms of contribution to knowledge’. Following our discussion, we agreed that since the hubs produce a large number of publications, it would be useful to monitor the uptake of all of them by tracking requests, online access and downloads, and references in other research or policy documents. This information would complement other indicators and serve as a proxy for the success and influence of the policy research the hubs produced.
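To make the proxy concrete, here is a minimal sketch in Python of how such uptake counts could be tallied and ranked. Everything in it, including the field names, the example figures and the weighting, is a hypothetical illustration rather than part of the HKH monitoring system:

    from dataclasses import dataclass

    @dataclass
    class UptakeRecord:
        """Hypothetical uptake counts for one hub publication."""
        title: str
        requests: int    # requests for the publication
        downloads: int   # online access/downloads
        citations: int   # references in other research or policy documents

    def uptake_score(record: UptakeRecord) -> int:
        # Illustrative proxy: weight citations more heavily than downloads,
        # since a reference in a policy document is a stronger uptake signal.
        # The weights are assumptions made for this sketch.
        return record.requests + record.downloads + 10 * record.citations

    publications = [
        UptakeRecord("Health financing brief", requests=12, downloads=340, citations=3),
        UptakeRecord("Workforce planning paper", requests=4, downloads=95, citations=1),
    ]

    # Rank publications by the proxy score to flag the most widely taken-up outputs.
    for record in sorted(publications, key=uptake_score, reverse=True):
        print(f"{record.title}: uptake score {uptake_score(record)}")

Counts like these say nothing about why a publication was taken up, which is why they were meant to complement, not replace, the qualitative indicators above.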

Documenting influence

Stories of change are one way to document the impact of research. They are short pieces that do not focus on the research process, but describe where research led to changes in policy, practice, knowledge, behaviour and/or attitudes. The hubs agreed to use them to document the influence of their research and capacity development activities on key stakeholders and counterparts in Southeast Asia and the Pacific, based on a suggested structure (PDF, 1.5MB) derived from work RAPID did at the Vietnam Academy of Social Sciences. Developing the stories of change (SoCs) wasn’t straightforward; the early drafts focused on activities and outputs rather than on changes in policies or attitudes. However, the hubs ultimately drew up some interesting stories that complemented the other M&E areas.

Reflections on lessons learned from this experience

  • Not all research (or capacity development) has to reach policy-makers directly to influence policy and practice. Developing the SoCs not only produced evidence of the hubs’ policy impact; it also helped the hubs refine their role as knowledge suppliers and as influencers of intermediary organisations that have the necessary connections to influence national policy.
  • The M&E of policy influence framework should be developed in parallel with a policy influencing plan that describes the areas of policy influence, the key stakeholders, and ways to manage policy influencing activities and communications. It is far easier to integrate the M&E of policy influence from the beginning of an initiative than to add it post hoc.
  • It pays to have an expert: the hubs with dedicated communications managers, unsurprisingly, produced the best SoCs. While this isn’t always possible, we could have done more work with the hubs’ communications managers to expand the range of communication channels they used, and to discuss how the universities could have supported the hubs’ communications plans. With this sort of project, responding flexibly to needs and keeping up regular communication counts.

Resources

  • Evaluating policy influence and advocacy

    On this BetterEvaluation theme page, we present a round-up of methods and approaches to evaluating policy influence and advocacy.

    "Influencing and informing policy is the main aim for many development organisations. However, activities directed at policy change are, in general, very hard to monitor and evaluate. As policy change is often a complex process, it is difficult to isolate the impact of a particular intervention from the influence of other factors and various actors. In addition, monitoring and evaluation tools usually used in managing interventions can be difficult to implement in these contexts."

  • A guide to monitoring and evaluating policy influence
    Using a literature review and interviews, this paper by Harry Jones aims to provide an overview of the different approaches to monitoring and evaluating policy influence.
