Communicating evaluation to non-evaluators

18th June 2015, by Simon Hearn

The Overseas Development Institute (ODI) has published a “10 things to know about evaluation” infographic in support of the International Year of Evaluation. I was part of the team that drafted it, and over 9 months, 8 meetings and 16 revisions I discovered just how difficult it can be to communicate a complicated set of ideas to a non-expert audience.

The challenge of speaking to different audiences parallels the challenges of communicating evaluation findings

The publication is aimed at people engaged with or interested in international development, but who don’t really know what evaluation is – or how it can be used. Our thinking in this field has been shaped through years of conducting evaluations, our involvement in BetterEvaluation, our work with the Impact Evaluation Methods Lab and our long-standing hosting of the Outcome Mapping Learning Community. We wanted to bring together what we’ve learned, dispel some of the myths about evaluation and break down barriers between the programme implementers and the evaluators.

The tricky thing was, we knew evaluators would be one of the first groups to jump on it – either because they want to pick it apart or (hopefully) because they see it as useful for communicating evaluation to clients and colleagues.

And so it had to be technically sound but jargon-free. For this reason, we had to abandon most technical definitions. While the communications specialists were protecting the accessibility of the messages, the evaluation specialists were protecting the nuances and accuracy.

In many ways, this paralleled the challenges of communicating evaluation findings: how do you capture variation and nuance, while presenting a concise set of clear messages?

Striking the right tone was important in delivering our message

We were aiming for a positive tone, which was actually quite hard since most of the points came from negative experiences of bad or mediocre evaluations. If we had been writing the ‘10 Don’ts of Evaluation’ we would probably have finished months earlier; the list would have included things like:

  • Don’t claim success based on unsystematic analysis of biased data.
  • Don’t use ‘cookie-cutter’ approaches without thinking about purpose, questions or context.
  • Don’t start with an answer and then search for data to support it.
  • Don’t treat evaluation as a one-off study undertaken only at the end of a project.

These experiences divided us. On one hand, we wanted to present evaluation as a professional field: it has methods, skills, competencies and technical standards that professional associations around the world rigorously uphold. On the other hand, we wanted to communicate it as something accessible to everyone, especially those managing and implementing programmes. In the end, we had to think back to our target audience. These pitfalls wouldn’t mean much to someone who is new to the field.

Talking about success and failure is risky business

One of the important messages was that success and failure are not black and white. Rarely will an intervention be 100% ‘successful’. Some aspects might have worked at that time, in that place and for that particular group, but others might not. Or not yet, or perhaps not with the measures used. The job of an evaluation is to bring all this evidence together and apply transparent criteria (developed through a transparent process) to make an overall judgement. It’s the understanding of what worked and what didn’t work, where, when and for whom that leads to learning and improvement.

It’s important to recognise the risks of using such value-laden words as ‘success’ and ‘failure’ in a world where so often ‘failure is not an option’. Funders and senior staff often explicitly or implicitly expect projects to demonstrate high-impact, cost-effective results. Speaking about success and failure can exacerbate this fear, driving project implementers away from good-quality, technically competent evaluation towards informal, unsystematic forms. The latter may give them greater control but ultimately provide a shaky foundation for decision-making.

Evaluation practice can be highly political, but to keep our messages simple we had to gloss over many of these contentious issues. In the end, we feel the infographic retains just enough of what we hoped to say and communicates it in a way that gives it a good chance of being read by new audiences.

Now that we’ve put our list out there, we’d like to hear from colleagues: what would you put on your list? What would you say about evaluation to your non-evaluator friends and colleagues, and how would you say it?

A special thanks to this page's contributors: Simon Hearn, Research Fellow, Overseas Development Institute.


Dennis Bours

Love it. I missed the infographic in my ODI news feed, but I'm happy to catch it here. I immediately saved it; it is powerful in all its simplicity.

You know, to me there is bias and there is subjectivity. I have an issue with bias, but I am open to subjectivity as long as I know what is objective, strictly speaking, and what is more subjective. People who are really close to the project often have a higher level of subjectivity towards its outcomes, but at the same time their information and, at times, anecdotal knowledge might provide unique insights into how the project functions and the results it achieves.


A few days ago at a reception (thank you Matt Keene!) I talked to an evaluator's daughter, about 10 years old. Her mother said: "Dennis is an evaluator, just like mommy." And I said: "Do you know what an evaluator is?" She shook her head, no. I said: "Your teacher sometimes gives you exercises, and in the end he will take a big red pencil and grade your work." Yes, she recognised that. He is then evaluating your work, I explained. He gives you a grade that tells you how well you did.

"How do you feel about that, when he gets out his big red pencil?" She wasn't too happy about that. I said; "That is exactly the problem we have when we do our work. People think we come with a big red pencil to grade their work. And it scares them."

So I asked her: "How do you learn? How do you know what you do well, and what you need to improve?" Exactly: it is your teacher telling you that. And he can do so because he evaluated your work. What we try to do is not to scare people with big red pencils. We try to make it a discussion, so that we learn about their project, about their work and how well they did, and the people in the project learn what they did well and how they can further improve. Just like your teacher helps you learn.

Simon Hearn

Thanks for sharing your story, Dennis. It reminds me of an exercise I've seen used to get researchers to communicate their research more clearly: the grandparent test. The idea is that if you can't explain your research to your grandparent or an elderly relative, then you're unlikely to be able to convince a policy maker.

I like your addition about bias and subjectivity as well. I was at a meeting yesterday with an evaluation commissioner who explained that, for their purpose, they didn't need complicated statistical 'objective' verification; simple triangulation by asking a few informed people would be sufficient. It usually comes down to purpose, users and uses.

Simon Hearn

Hi Colleen, yes, we struggled with the difference between monitoring and evaluation – at least I did. I don't really have a problem with the classic definitions we used in the infographic, but I don't think they are the only understandings of these notions. Many people see a more blurry line between the 'm' and the 'e'. For example, in some cases monitoring that only involves the collection of data may be insufficient, and project staff will need to employ evaluative techniques to make sense of the data. Also, as you mention, some forms of evaluation operate very close to project implementation and may involve regular monitoring in order to provide quick feedback to project staff. In these cases I would argue that neither can be classed purely as monitoring or evaluation; both involve a combination of M&E – and this is probably the norm in many cases.
