Communicating evaluation to non-evaluators

By Simon Hearn

The Overseas Development Institute (ODI) has published a “10 things to know about evaluation” infographic in support of the International Year of Evaluation.

I was part of the team that drafted it, and over 9 months, 8 meetings and 16 revisions I discovered just how difficult it can be to communicate a complicated set of ideas to a non-expert audience.

The challenge of speaking to different audiences parallels the challenge of communicating evaluation findings

The publication is aimed at people engaged with or interested in international development, but who don’t really know what evaluation is – or how it can be used. Our thinking in this field has been shaped through years of conducting evaluations, our involvement in BetterEvaluation, our work with the Impact Evaluation Methods Lab and our long-standing hosting of the Outcome Mapping Learning Community. We wanted to bring together what we’ve learned, dispel some of the myths about evaluation and break down the barriers between programme implementers and evaluators.

The tricky thing was, we knew evaluators would be one of the first groups to jump on it – either because they want to pick it apart or (hopefully) because they see it as useful for communicating evaluation to clients and colleagues.

And so it had to be technically sound but jargon-free, which meant abandoning most technical definitions. While the communications specialists were protecting the accessibility of the messages, the evaluation specialists were protecting the nuances and accuracy.

In many ways, this paralleled the challenges of communicating evaluation findings: how do you capture variation and nuance, while presenting a concise set of clear messages?

Striking the right tone was key to delivering our message

We were aiming for a positive tone, which was actually quite hard since most of the points came from negative experiences of bad or mediocre evaluations. If we had been writing the ‘10 Don’ts of Evaluation’ we would probably have finished months ago, with entries like:

  • Don’t claim success based on unsystematic analysis of biased data.
  • Don’t use ‘cookie-cutter’ approaches, not thinking about purpose, questions or context.
  • Don’t start with an answer and then search for data to support it.
  • Don’t treat evaluation as a one-off study undertaken only at the end of a project.

These experiences divided us. On the one hand, we wanted to present evaluation as a professional field: it has methods, skills, competencies and technical standards that professional associations around the world uphold rigorously. On the other hand, we wanted to communicate it as something accessible to everyone, especially those managing and implementing programmes. In the end, we had to think back to our target audience: these pitfalls wouldn’t mean much to someone who is new to the field.

Talking about success and failure is risky business

One of the important messages was that success and failure are not black and white. Rarely will an intervention be 100% ‘successful’. Some aspects might have worked at that time, in that place and for that particular group, but others might not. Or not yet, or perhaps not with the measures used. The job of an evaluation is to bring all this evidence together and apply transparent criteria (developed through a transparent process) to make an overall judgement. It’s the understanding of what worked and what didn’t work, where, when and for whom that leads to learning and improvement.

It’s important to recognise the risks of using value-laden words like ‘success’ and ‘failure’ in a world where so often ‘failure is not an option’. Funders and senior staff often explicitly or implicitly expect projects to demonstrate high-impact, cost-effective results. Speaking about success and failure can exacerbate this fear, driving project implementers away from good quality, technically competent evaluation towards informal, unsystematic forms. The latter may give them greater control but ultimately provides a shaky foundation for decision-making.

Evaluation practice can be highly political

But to keep our messages simple, we had to gloss over many of these contentious issues. In the end, we feel the infographic retains just enough of what we hoped to say, and communicates it in a way that gives it a good chance of being read by new audiences.

Now that we’ve put our list out there, we’d like to hear what you would put on yours. What would you say about evaluation to your non-evaluator friends and colleagues, and how would you say it?