52 weeks of BetterEvaluation: Week 32: Monitoring and evaluating policy influence and advocacy (Part 1)

Simon Hearn

This two part mini-series looks at monitoring and evaluation of policy influencing and advocacy. This blog introduces a great new paper from Oxfam America exploring this topic from an NGO perspective and the second blog presents the perspective of a research programme.

Influencing policy and practice is an important component of many social interventions. Raising the debate on important issues, informing the public, advising decision makers, supporting the use of evidence, lobbying for a particular position - these are all common activities for NGOs, think tanks, media organisations and campaigners alike (albeit in varying mixes). And all these activities share the challenge of how to measure the effect they have on actual change in policy and practice.

The particular question of how NGOs monitor, evaluate and learn (MEL) from their campaigning work was the focus of a recent paper from Oxfam America: Monitoring, evaluation and learning in NGO advocacy. The authors, Jim Coe and Juliet Majot, surveyed and interviewed MEL staff, campaigners and managers from nine organisations who volunteered to be part of the study. Gabrielle Watson, the Manager of Policy Advocacy Evaluation at Oxfam America, offers her highlights of the study in a recent blog, emphasising the significance for MEL practitioners in particular.

The report presents a number of lessons drawn from the findings that will be of particular interest to BetterEvaluation readers. The authors unpack twelve practices for good advocacy MEL:

  1. Ensure that centralized systems and parameters invite localized adaptation.
  2. Subject moves towards quantifying information to a ‘robustness test’ to ensure that any such analysis and dissemination supports meaningful use.
  3. Give particular focus to testing the links in the chain of change, rather than merely assessing the various elements in isolation.
  4. Develop systems that fully contextualize contribution, including understanding the intervention of other actors and an overall sense of complex dynamics at play.
  5. Design MEL systems to fit around existing advocacy programs, establishing a firm link to planning, including strategic planning and budgeting processes.
  6. Build on the motivations and interests of different users, and their different uses of data and analysis, to devise learning moments and opportunities at key short-, medium-, and longer-term stages of the advocacy program.
  7. Secure active involvement of senior managers in review and analysis processes.
  8. Prioritize the facilitative role of MEL professionals in building evaluative capacity organization-wide, including through design (and constant iteration) of ways of working that make it easy for people to engage meaningfully in MEL processes.
  9. Take active steps to rebalance accountabilities where necessary, countering a clear tendency to prioritize upwards accountability, to funders in particular.
  10. Pay particular attention to building capacity for strategic – as well as tactical – learning and adaptation.
  11. Develop an overarching approach to MEL that is intentionally designed to challenge and test strategy and the assumptions underlying it, as well as to improve implementation of existing strategy.
  12. Gather evidence of MEL costs and benefits.

The challenge of evaluating advocacy and policy influence has captured the attention of many and there is a growing body of knowledge on the subject which BetterEvaluation has attempted to synthesise in the theme page: Evaluating policy influence and advocacy. The page offers advice on evaluating four types of influencing: advising, advocacy, lobbying and activism.

This blog has looked at advocacy from an NGO perspective. The next blog will look at how to monitor and evaluate the influence of research on policy, describing the practice of a research facility in Australia designed to inform health policy in South East Asia and the Pacific.


  • Monitoring, evaluation and learning in NGO advocacy
    This state-of-the-art report, commissioned by Oxfam America, describes how nine advocacy and campaigning organisations in the UK and US undertake monitoring and evaluation of their campaigns. The study was undertaken by two independent evaluators experienced with advocacy initiatives and involved a voluntary cohort of nine NGOs who were surveyed and interviewed.
  • A guide to monitoring and evaluating policy influence
    Using a literature review and interviews, this paper by Harry Jones aims to provide an overview of the different approaches to monitoring and evaluating policy influence.