Simon Hearn continues BetterEvaluation’s theme on the monitoring and evaluation of policy change by suggesting a set of measures to help those struggling to monitor the slippery area of policy influence and advocacy. For more on this theme, see Josephine Tsui’s blog on attribution and contribution in the M&E of advocacy and Julia Coffman’s on innovations in advocacy evaluation.
Unfortunately, I believe so.
Last year I met a group of Brazilian evaluators at a conference, and I learned from them about the growing demand for good evaluative studies in Brazil, but also about the need for more capacity-building initiatives in this area, and for practical, relevant information written for a local context, in Portuguese. That was when I decided to look into what this enthusiastic group was working on. And not only groups in Brazil, but in other Portuguese-speaking countries as well.
One of the challenges of working in evaluation is that important terms (like ‘evaluation’, ‘impact’, ‘indicators’, ‘monitoring’ and so on) are defined and used in very different ways by different people. Sometimes the same word is used to mean quite different things; other times different words are used to mean the same thing. And, most importantly, many people are simply unaware that others use these words in these different ways.
We’ve talked before on this blog about evaluating advocacy interventions. One of the hottest debates is how and to what extent it is possible to establish causation in advocacy programmes. Here, Josephine Tsui, a Research Officer at ODI and co-author of a new report ‘Monitoring and evaluation of policy influence and advocacy’, explains the issues and how we can approach this thorny issue.
In February, BetterEvaluation hosted a webinar on working with children in evaluation. Mallika Samaranayake and Sonal Zaveri of the Community of Evaluators-South Asia, presented their participatory approach to conducting evaluations of, with and by children. During the webinar, Mallika and Sonal answered a number of questions from the audience: here we’ve selected some of the highlights.
Continuing our season of blogs on presenting evaluation findings in ways that will get them read (and hopefully used), Joitske Hulsebosch, an independent consultant, contributes her ideas on how to present your findings in the form of an infographic. Catch up on recent contributions from Rakesh Mohan and Patricia Rogers on sharing evaluation findings.
A few weeks ago we responded to a question from BetterEvaluation user Rituu B. Nanda on interesting ways of presenting data in evaluation reports. The conversation continued on the American Evaluation Association LinkedIn group. This week we're sharing some ideas from Rakesh Mohan on ways of making evaluation reports more interesting. Rakesh is Director at the Office of Performance Evaluations, Idaho State Legislature. He discusses how his team presented the findings of different evaluations which were intended for both policy-makers and public audiences.
Alan Mountain supports BetterEvaluation while he completes his Masters of International Development at RMIT University. In this blog, he looks at which resources have been most helpful to him as a newcomer to evaluation, both to understand the essentials and to dive into more detail on different aspects.
This week we launch the first in an ongoing series of Real-Time Evaluation Queries, in which BetterEvaluation members ask for advice and assistance with something they are working on. Together we suggest some strategies and useful resources, and then we find out what was actually useful (or not) and why.
Recently BetterEvaluation member Rituu B. Nanda asked us for advice on producing interesting evaluation reports:
Innovation is a relative concept. It is about new practice … for the topic and person or group in question. The context-specific nature of what constitutes ‘an innovation’ became clear during a recent event around global transparency and accountability efforts.