Blogs

52 weeks of BetterEvaluation: Week 38: Ubuntu in evaluation

17th September 2013 by Benita Williams

The South African Monitoring and Evaluation Association conference starts on Wednesday this week, with the theme ‘Improving use and results’. On Thursday, the programme includes a session called ‘Made in Africa: evaluation for development’, exploring values and diversity in development evaluation. To kick off the discussion, we asked Benita Williams, an evaluator from Pretoria, South Africa, how her values affect her evaluation work.

52 weeks of BetterEvaluation: Week 37: Collaborative Outcomes Reporting

13th September 2013 by Jess Dart

Collaborative Outcomes Reporting (COR) is an approach to impact evaluation that combines elements of several rigorous non-experimental methods and strategies. You’ll find it on the Approaches page of the BetterEvaluation site: an approach combines several options to address a number of evaluation tasks. This week we talk to Jess Dart, who developed COR. Jess is the new steward for BetterEvaluation’s COR page and, together with Megan Roberts from Clear Horizon, has provided a step-by-step guide, advice on choosing and using the approach well, and examples of its use.

52 weeks of BetterEvaluation: Week 35: Social return on investment in evaluation

29th August 2013 by Wouter Rijneveld

In this week’s blog we interview Wouter Rijneveld, a consultant working on the measurement and utilisation of results, mainly in international development. He recently published a paper on the use of the Social Return on Investment (SROI) approach in Malawi, and we wanted to find out about his experience of using this less-reported approach. We were doubly interested when he told us that he was initially sceptical about SROI.

52 weeks of BetterEvaluation: Week 34: Generalisations from case studies?

22nd August 2013 by rickjdavies

An evaluation usually involves some generalisation of the findings to other times, places or groups of people. If an intervention is found to be working well, we could generalise to say that it will continue to work well, that it will work well in another community, or that it will work well when expanded to wider populations. But how far can we generalise from one or more case studies? And how do we go about constructing a valid generalisation? In this blog, Rick Davies explores a number of different types of generalisation and some of the options for developing valid generalisations.

52 weeks of BetterEvaluation: Week 33: Monitoring policy influence part 2 - like measuring smoke?

13th August 2013 by Arnaldo Pellini

In the second part of our mini-series on monitoring and evaluating policy influence, Arnaldo Pellini, Research Fellow at the Overseas Development Institute, looks at a project supporting research centres in Australia to monitor their impact on health policy in Southeast Asia and the Pacific. Arnaldo explores the main challenges and makes some recommendations for others working on the M&E of policy influence.

52 weeks of BetterEvaluation: Week 32: Monitoring and evaluating policy influence and advocacy (Part 1)

2nd August 2013 by Simon Hearn

This two-part mini-series looks at the monitoring and evaluation of policy influencing and advocacy. This first blog introduces a great new paper from Oxfam America exploring the topic from an NGO perspective; the second will present the perspective of a research programme.

Mixed methods in evaluation Part 3: Enough pick and mix; time for some standards on mixing methods in impact evaluation

1st August 2013 by Tiina Pasanen

In our third blog on mixed methods in evaluation, Tiina Pasanen from ODI focusses on impact evaluations (IEs), a specific type of evaluation receiving a lot of attention in international development right now, with hundreds conducted every year. The clear majority are based on quantitative data and econometric analysis. There is much talk about the importance of combining methods to triangulate results and to better understand why something works, but in reality mixed methods IE designs are still rare and often fail to provide enough information for readers to follow and assess what has been done and why. As the number of mixed methods IEs is likely to grow over the next few years, should there be minimum standards for what constitutes a mixed methods design?

Mixed methods in evaluation part 2: exploring the case of a mixed-method outcome evaluation

31st July 2013 by Willy Pradel

We continue our mini-series on mixed methods in evaluation with an interview with the three authors of the recently published paper, Mixing methods for rich and meaningful insight.

Willy Pradel and Gordon Prain from the International Potato Centre in Lima, Peru, and Donald Cole from the University of Toronto discuss the evaluation they recently conducted, which applied a mixed-methods approach to capture and understand a wide variety of changes to organic markets in the Central Andes region. The case demonstrates both a good rationale for choosing a mixed-method design and an authentic implementation that effectively mixes quantitative and qualitative data to enhance the value of each.