Our second year of 52 weeks of BetterEvaluation! Check out last year's here.
Simon Hearn continues BetterEvaluation’s theme on the monitoring and evaluation of policy change by suggesting a set of measures to help those struggling to monitor the slippery area of policy influence and advocacy. For more on this theme, see Josephine Tsui’s blog on attribution and contribution in the M&E of advocacy and Julia Coffman’s on innovations in advocacy evaluation.
Last year I met a group of Brazilian evaluators at a conference, and learned from them about the growing demand for good evaluation studies in Brazil, but also about the need for more capacity-building initiatives in this area, and for practical, relevant information written for a local context, in Portuguese. So I decided to find out what this enthusiastic group was working on, and not only groups in Brazil, but in other Portuguese-speaking countries as well.
One of the challenges of working in evaluation is that important terms (like ‘evaluation’, ‘impact’, ‘indicators’, ‘monitoring’ and so on) are defined and used in very different ways by different people. Sometimes the same word is used to mean quite different things; other times different words are used to mean the same thing. And, most importantly, many people are simply unaware that others use these words in these different ways.
We’ve talked before on this blog about evaluating advocacy interventions. One of the hottest debates is how and to what extent it is possible to establish causation in advocacy programmes. Here, Josephine Tsui, a Research Officer at ODI and co-author of a new report ‘Monitoring and evaluation of policy influence and advocacy’, explains the issues and how we can approach this thorny issue.
In February, BetterEvaluation hosted a webinar on working with children in evaluation. Mallika Samaranayake and Sonal Zaveri of the Community of Evaluators South Asia presented their participatory approach to conducting evaluations of, with and by children. During the webinar, Mallika and Sonal answered a number of questions from the audience: here we’ve selected some of the highlights.
Continuing our season of blogs on presenting evaluation findings in ways that will get them read (and hopefully used), Joitske Hulsebosch, an independent consultant, contributes her ideas on how to present your findings in the form of an infographic. Catch up on recent contributions from Rakesh Mohan and Patricia Rogers on sharing evaluation findings.
A few weeks ago we responded to a question from BetterEvaluation user Rituu B. Nanda on interesting ways of presenting data in evaluation reports. The conversation continued on the American Evaluation Association LinkedIn group. This week we're sharing some ideas from Rakesh Mohan on ways of making evaluation reports more interesting. Rakesh is Director at the Office of Performance Evaluations, Idaho State Legislature. He discusses how his team presented the findings of different evaluations which were intended for both policy-makers and public audiences.
Alan Mountain supports BetterEvaluation while he completes his Masters of International Development at RMIT University. In this blog, he looks at which resources have been most helpful to him as a newcomer to evaluation, both to understand the essentials and to dive into more detail on different aspects.
This week we launch an ongoing series of Real-Time Evaluation Queries, in which BetterEvaluation members ask for advice and assistance with something they are working on. Together we suggest some strategies and useful resources, and then we find out what was actually useful (or not) and why.
Innovation is a relative concept. It is about new practice … for the topic and person or group in question. The context-specific nature of what constitutes ‘an innovation’ became clear during a recent event around global transparency and accountability efforts.
BetterEvaluation is at the African Evaluation Association Conference in Yaounde, Cameroon this week. We are giving away a book each day in a prize draw. If you are not there, or if you miss out on the prize, you can still find out where to buy the books and where to get related free information and resources online.
Julia Coffman is Director of the Centre for Evaluation Innovation. In the third blog of our innovation in evaluation series, she looks at some recent innovations in a notoriously tricky area: advocacy evaluation. Last week, Thomas Winderl explored how development evaluation must evolve to meet the challenge of complexity and responsive planning. This week we’ll be reporting from the African Evaluation Association’s 7th international conference, where BetterEvaluation is supporting a strand of conference presentations and posters on methodological innovation.
Development aid is changing rapidly – so must development evaluation. This is the second post in our series of innovation in development evaluation. Thomas Winderl, an evaluation consultant and co-author of ‘Innovations in monitoring and evaluating results’ explains why evaluation needs to keep pace with an increasing understanding of complexity in development planning, why multi-level mixed methods will be the new norm, and why evaluators need to get more imaginative about primary data collection. Read part one in this series here.
This is the first in a series of blogs on innovation which includes contributions from Thomas Winderl and Julia Coffman. The series will lead up to the African Evaluation Association conference at the beginning of March in Yaounde, Cameroon, where BetterEvaluation will be sponsoring a strand on methodological innovation.
BetterEvaluation hosted a webinar this week with Sonal Zaveri and Mallika Samaranayake of the Community of Evaluators South Asia, on working with children in evaluation.
Last week Michael Quinn Patton shared the first five of his top ten trends in qualitative evaluation methods over the past decade. This week he finishes the series and identifies four challenges and opportunities he sees for qualitative approaches in the future.
Children are one of the most vulnerable groups to work with, so there is a lot to consider when planning an evaluation that involves children. For example, is it an evaluation of children's knowledge, feelings and actions; an evaluation with children; or an evaluation by children? Involving children requires a different set of skills and tools, especially if the evaluation is to lead to the children's own reflection and empowerment.
My 2014 evaluation events calendar was launched in earnest this week with a workshop hosted by the US Institute of Medicine focusing on evaluation methods and considerations for large-scale, complex, multi-national, global health initiatives - such as the Global Fund or PEPFAR. I was invited by the organisers to present the BetterEvaluation Framework to frame the discussions of the two-day event, and together with Patricia, Greet and other BetterEvaluation colleagues we developed an engaging presentation.