Week 7: Innovation in Evaluation

20th February 2014 by Patricia Rogers

This is the first in a series of blogs on innovation which includes contributions from Thomas Winderl and Julia Coffman. The series will lead up to the African Evaluation Association conference at the beginning of March in Yaounde, Cameroon, where BetterEvaluation will be sponsoring a strand on methodological innovation.

We consider innovations to be methods and approaches to evaluation that are genuinely new, not simply a relabelling of existing knowledge under a new, proprietary label. Many people come into evaluation without formal training, or with training that does not provide a good understanding of the range and scope of evaluation practice and theory. So they sometimes claim that an approach is innovative (such as using a mix of qualitative and quantitative data in a single evaluation) when in fact it is well established as good evaluation practice.

Invention, bricolage and translation

It’s worth thinking about different types of innovation.  Some innovations in evaluation involve invention of new technology.  The possibilities that big data present in terms of tracking events through social media or geo-tagging were simply not possible in earlier years when these technologies were not available.

Some innovations are a bricolage, or patchwork, of previous ideas and techniques brought together more coherently and used more systematically. For example, the Collaborative Outcomes Reporting technique brings together existing methods of contribution analysis, data trawl and expert panel in a package that makes these pieces fit together in a more accessible way.

Some innovations involve borrowing ideas and methods from other disciplines and professions.  Approaches to causal inference for evaluation have been imported from agricultural science, clinical trials, public health,  political science, law and history. Different ways of doing interviews have been borrowed and adapted from anthropology and market research.  All evaluators can contribute to this innovation by bringing across techniques from their primary discipline or from other aspects of their lives (Stephanie Evergreen has recently demonstrated this in her use of fortune cookies to communicate evaluation results).

And some innovation is about learning from practice and thinking about a new role for evaluators. Rather than seeing evaluation (and the work of evaluators) as something that comes along after a programme has been designed, and sometimes after it has been implemented, there is increasing interest in how the process of evaluation can contribute to ongoing improvements in implementation, and to improved planning and design up front. Real-time evaluation, developmental evaluation and positive deviance are examples of approaches that support improved planning and ongoing learning.

Worthwhile innovation

What sorts of innovation are actually new - and useful? Where is innovation most needed - and where is it a distraction from doing the basics well?

Good innovations add value.  The growing interest in applying complexity ideas, for example, has arisen because they can help us understand and improve programmes and policies, not because they are trendy. Big data is becoming popular because it can provide insights not available through other means. 

This means that innovation is likely to be most helpful where existing knowledge is not enough to do what is needed.  Identifying these areas is therefore an important part of supporting effective innovation.

Supporting innovation

Innovation is hard.  It is not always clear what should be done and, when applying something that hasn’t been done before, we need to anticipate that it may not work.  Supportive structures  (and the right expectations) are needed for systematic experimentation and learning. 

Two current projects illustrate some ways of systematically supporting innovation in evaluation.

The Australian Department of Foreign Affairs and Trade (DFAT) is supporting the Methods Lab, in collaboration with ODI and BetterEvaluation. It is experimenting with a variety of methods for improving impact evaluation, developing and trialling materials to guide the selection and implementation of different methods.

The Office of Learning, Evaluation and Research (LER) in the Bureau for Planning, Policy and Learning in USAID has begun a process of experimenting with complexity-aware monitoring. It has produced a discussion note outlining four possible methods (http://usaidlearninglab.org/library/discussion-note-complexity-aware-monitoring), which also includes suggestions for systematically experimenting with new methods.

Join us as we explore innovation in evaluation over the next few weeks, including live tweeting from Yaounde on #AfrEA14. We look forward to hearing  your experiences, suggestions and questions.

Further reading and resources

Collaborative outcomes reporting

Collaborative Outcomes Reporting (COR) is a participatory approach to impact evaluation based around a performance story that presents evidence of how a program has contributed to outcomes and impacts. This evidence is then reviewed by both technical experts and program stakeholders, who may include community members.


Big Data

Big data refers to data that are so large and complex that traditional methods of collection and analysis are not possible. It includes 'data exhaust' - data produced as a byproduct of user interactions with a system.


Methods Lab: Improving practice and building capacity for impact evaluation in DFAT

Between 2012 and 2016, the Methods Lab will develop, test and seek to institutionalise flexible, affordable approaches to impact evaluation. Cases will be chosen that are harder to evaluate because of the complexity of their context or the diversity of intervention variables. The Methods Lab approach combines a hands-on 'learning-by-doing' style with commissioning and implementing agencies, mixed methods for data collection and analysis, guidance to ensure rigorous thought, sharing experiences via an international platform (BetterEvaluation), and institutionalising processes and best practices. While the cases and the Methods Lab approach focus on DFAT and its programmes, the results will have wide applicability.


Discussion Note: Complexity Aware Monitoring

PPL's Office of Learning, Evaluation and Research (LER) has developed a Discussion Note: Complexity-Aware Monitoring. This paper is intended for those seeking cutting-edge solutions to monitoring complex aspects of strategies and projects.



Image source: Light Bulb Basil, by Johannes H. Jensen

A special thanks to this page's contributor: Patricia Rogers, CEO, BetterEvaluation.


Patricia Rogers

Hi Professor Phatak,  

Great to hear about your interest in innovative methods in evaluation.  They are a particular area of interest on the BetterEvaluation website as there is not so much information about what they are and how to apply them.  You can find links to various resources and information about specific methods.  

It's also worth thinking about how innovation might be most useful for your students.  What are the challenges that current methods can't adequately address?  Where is innovation needed?  

I will be exploring these issues in July at the IPDET course in Ottawa and will share new examples and materials through the site.

Patricia Rogers

Patricia Rogers

Thanks for your comment, Thadeo. There is much to learn about the intersection between machine learning and traditional evaluation. How would you suggest evaluators develop skills in using machine learning, and machine learning experts develop an understanding of evaluation concepts and techniques - or should we aim to build multidisciplinary teams with experts in both?

Patricia Rogers

Thanks for your comments, Thadeo.  I agree that evaluation practice needs to incorporate new methods such as machine learning to analyse big data.  Are there some specific resources you'd recommend for evaluators to start learning from?  Do you have some good examples where this has been used in an evaluation? 

Because evaluation is interdisciplinary, people come into it from different disciplinary backgrounds and bring the benefit of that. It's terrific that you are in a position to demonstrate the value that computer science can bring to evaluation.
