Global innovations in measurement and evaluation

Anoushka Kenley

This guest blog is by Anoushka Kenley of NPC, one of the authors of NPC's recent report Global Innovations in Measurement and Evaluation.

Are we too steadfast in our approach to evaluation? Too stuck in our ways?  What are people doing differently to make evaluation more informative, or to help us get the data we need?  That's what NPC wanted to find out when we interviewed experts from across the globe about the most exciting trends in evaluation today.

Some of our discussions focused on technology: how natty solutions to data gathering, or developments in machine learning, are opening up the possibilities of evaluation.  But many of our interviewees talked about the culture of evaluation and the sorts of questions evaluators want to ask.  In part, this seems to be driven by changes in the wider environment: in a world where people give instant feedback and expect instant responses, evaluations published months or years after a programme is delivered are less appealing.  Changes in the culture of programme delivery, like shifting the focus of accountability downwards towards service users (rather than upwards towards funders), are influencing the design and objectives of evaluation too.

In the end we came up with a list of eight trends that are shaping how we do evaluation, and in the report we illustrate these with examples of organisations putting them into practice well.  For the full list, you can read the report (Global innovations in measurement and evaluation).  To whet your appetite, here are three of the trends we discuss…

Impact management: Evaluation that can drive delivery

Evaluation is increasingly driving programme design and delivery, rather than simply assessing how we got there. It takes inspiration from the private sector, where regular customer feedback influences decision-making in rapid feedback cycles. More evaluation approaches now focus on quick learning and on opportunities to feed into programme design on an ongoing basis.  Specific techniques like developmental evaluation and adaptive management are dynamic, with routine data collection integrated into strategy and performance management.

For an indication of their growing popularity, see the USAID- and DfID-funded Global Learning for Adaptive Management (GLAM) programme, for example. The same principles of iterative evaluation can be found in programmes like Acumen's Lean Data, which uses relatively low-tech solutions to help evaluators get data faster and act on it. These sorts of regular, light-touch evaluation designs can be particularly useful in the early stages of programme development.

Remote sensing: Technology that opens the door to new data possibilities

Remote sensing technology is helping tackle data collection challenges, particularly in remote locations.  The Clean Cookstoves example demonstrates the effectiveness of sensors where traditional data collection would have been expensive and time-consuming. Evaluating the impact of the Global Environment Facility's international anti-land degradation programmes would have been impossible without satellite monitoring of forest landscapes.

Datasets can also be richer and more accurate when they're gathered by sensors. The Array of Things urban sensing project in Chicago is a network of sensors installed around the city that collects real-time data on variables ranging from air quality to pedestrian traffic.  It will generate a comprehensive, real-time (and open) dataset that paves the way for inventive evaluation projects.

Theory-based evaluation: Framing the question to help us better understand complexity

Theory-based evaluation, in approaches like realist evaluation, tests not only whether a programme works, but also how and why it works (or doesn't).  It helps us recognise, and address in evaluation design, the complexity of social systems, which can get us further than traditional 'what works' evaluation.  It is also useful for generating insight into how successful programmes can be replicated in other settings. Theory-based evaluation isn't a cutting-edge concept, but it made our list because it's spreading beyond academic spheres and becoming more commonly used among governments and NGOs.  See VSO's evaluation of the impact of volunteering, or DfID's evaluation of its Building Capacity to Use Research Evidence (BCURE) programme, for examples.

But will these trends apply to me?

Some of the techniques or applications discussed in the report might not be relevant to all organisations or projects, but the principles behind them are.  For example, while adaptive management might not be an appropriate approach for your programme, it might still be worth reflecting on how you can design an iterative feedback cycle that helps evaluation influence ongoing programme design.  Conducting a full realist evaluation might not appeal, but starting with a theory of change, or trying to evaluate the mechanism of change as distinct from programme context, might strengthen your results. The range of trends we highlight in the report is intended to inspire us all to think a little differently about how we can strengthen our evaluations.


Global innovations in measurement and evaluation is published by NPC, with support from the UK Department for International Development, Oxfam, the NSPCC, Save the Children and Bates Wells Braithwaite.  It is available to download for free at
