International development is fixated on impact. But how do we know we’re all talking about the same thing?
Simon Hearn's blog
The Overseas Development Institute (ODI) has published a “10 things to know about evaluation” infographic in support of the International Year of Evaluation. I was part of the team that drafted it, and over nine months, eight meetings and 16 revisions I discovered just how difficult it can be to communicate a complicated set of ideas to a non-expert audience.
Simon Hearn continues BetterEvaluation’s theme on the monitoring and evaluation of policy change by suggesting a set of measures to help those struggling to monitor the slippery area of policy influence and advocacy. For more on this theme, see Josephine Tsui’s blog on attribution and contribution in the M&E of advocacy and Julia Coffman’s on innovations in advocacy evaluation.
This two-part mini-series looks at the monitoring and evaluation of policy influencing and advocacy. This blog introduces a great new paper from Oxfam America exploring the topic from an NGO perspective; the second blog will present the perspective of a research programme.
This week we are focusing on mixed methods in evaluation. We'll have two further blogs on the subject: one exploring an evaluation that used mixed methods, and the other asking whether we are clear enough about what mixed methods really means - there are many evaluations out there claiming to be mixed methods when all they do is supplement a quantitative survey with a handful of interviews.
I’m sure most of our readers will agree that the goal of evaluation is not the fulfillment of a contract to undertake a study but the improvement of social and environmental conditions: evaluators really do want to see their evaluations used for positive, productive purposes. In these days of information overload, then, we cannot expect that publishing an evaluation report will, on its own, be enough to inform or influence these improvements.
So what can be done to move from a situation where evaluation reports sit on shelves gathering dust - or, worse, are misused - to one where evaluations contribute to “social betterment”?
Whether you are commissioning an evaluation, designing one or implementing one, having - and sharing - a very clear understanding of what is being evaluated is paramount. For complicated or complex interventions this isn't always as straightforward as it sounds, which is why BetterEvaluation offers specific guidance on options for doing this.
One of the most effective ways of learning about the evaluation field is to attend a conference, present your work and interact with other professionals.
This is why, this week, we are providing a round-up of evaluation conferences taking place this year. We have already covered the Evaluation Conclave, which took place in Kathmandu in February, but there are many more happening across the world.
As part of developing the BetterEvaluation site, we ran an "Evaluation Challenge" process, inviting people to submit their biggest challenges in evaluation, and then inviting experts to suggest ways to address these.
This week we present the first challenge, one that is frequently heard from people when they first start learning about the field of evaluation: