52 weeks of BetterEvaluation: Week 27: How can evaluation make a difference?

Simon Hearn

I'm sure most of our readers will agree that the goal of evaluation is not the fulfilment of a contract to undertake a study but the improvement in social and environmental conditions: evaluators really do want to see their evaluations used for positive, productive purposes.

In these days of information overload, we cannot expect a published evaluation report, on its own, to be a sufficient strategy to inform or influence these improvements.

So what can be done to move from a situation where evaluation reports sit on shelves gathering dust – or worse, are misused – to one where evaluations contribute to “social betterment”?

This blog summarises a webinar hosted by the American Evaluation Association and BetterEvaluation in May 2013.

You can watch the webinar, access downloadable slides, and get a full overview of the webinar series below:

  • Report and support use of findings
  • BetterEvaluation series of AEA Coffee Break webinars

I want to propose three roles for evaluators and commissioners of evaluation: Dissemination, Engagement and Influencing:

  • DISSEMINATION is something most of us will be familiar with. After evaluation outputs are produced they are packaged and sent out in various forms to various people (or platforms) to communicate findings. This is more than posting a report to a contact list and can include things like presenting at conferences, sending postcards of key messages or writing a blog post.
  • ENGAGEMENT is more involving and is about working alongside the intended users of the evaluation – be they the clients, programme team or communities – to support their understanding of the findings and how they might use them. Following the Utilization Focused Evaluation approach, engagement means involving the users in the decision making around the evaluation process right from the beginning.
  • INFLUENCING involves using the findings of the evaluation to attempt to bring about change in the wider system (archived link), particularly among those not engaged as primary users. For some, this role crosses into the realm of advocacy and should be kept strictly separate from the role of evaluator. Others, however, argue that evaluation inevitably leads the evaluator to pick a side (PDF) and to make a value commitment to a particular ideal.

So what should we as evaluators or commissioners of evaluation be aiming for? The balance of these roles will vary for each individual and probably for each evaluation – depending on the particular values of that individual and the purposes of the evaluation. The BetterEvaluation Rainbow Framework can help us decide what to do by walking us through five evaluation tasks and a number of options for each:

  1. The first task is to identify the reporting requirements. Sometimes there are clear requirements already defined in the Terms of Reference, but even in these cases it is important for the evaluator and commissioner to talk them through and make sure there is clarity and agreement. Deciding the requirements means walking in the footsteps of the users of the evaluation: Who are they? What are their information needs? How can reporting be most useful to them? What timeframe is required? Will drafts need reviewing? By whom, and when? One idea which can help manage multiple outputs is a communication plan, which describes the different outputs, their audiences, purposes, requirements, and drafting, review and publication deadlines.
  2. The second task is to decide what types of reporting formats will be most appropriate for the intended users, which can require thinking outside of the box. This reminds me of a story a researcher once told me. They were discussing a research report with the local community in Zimbabwe who were participants in the research. The mothers in the community told the researcher “if you send us a stack of paper reports we will burn them to cook porridge for our children”. And when asked what would be more appropriate they said they would prefer that the research be presented through community theatre so that they could situate themselves in the research findings. This is just one example; you might also consider using postcards, websites, blogs, video, presentations, posters, cartoons, poetry, infographics – the list goes on. A really useful resource is this book by Rosalie Torres, Hallie Preskill and Mary Piontek.
  3. The third task is to ensure that whatever outputs we produce are easy to access and use for different users. We don’t want our outputs to be like a maze – difficult to find and hard to navigate. We want to make sure all users can get the information they need as simply as possible. You might start by simplifying the report layout as much as possible; there are good design principles you can apply which help to focus on the key information and direct readers to what they are looking for. You may also need to think about supporting people with disabilities, such as visual impairment and colour blindness. There are some standard guides you can use for this.
  4. The fourth task is to develop recommendations. Now, not all commissioners require the evaluator to develop recommendations but some do. In these cases we have to think through how they will be developed and by whom. To ensure that recommendations are as practical and relevant as they can be, it can be a good idea to develop them in collaboration with the users – or at least include a process of testing and feedback. There are a variety of facilitation techniques and technologies that can help with this process.
  5. Finally, we have supporting use. In addition to engaging intended users in the evaluation process, how will you support them after the final outputs are produced and delivered? How are you going to put the evaluation to work and help with implementation of the findings?

There are plenty more suggestions for this and for the other tasks, and more guidance is provided in the Report and Support Use of Findings section of the Rainbow Framework.


'52 weeks of BetterEvaluation: Week 27: How can evaluation make a difference?' is referenced in: