Writing for utilisation

Evaluators need to communicate better and foster the utilisation of evaluation findings through clear and engaging writing.

Evaluation reports are the cornerstone of how evaluators communicate evidence, findings, and recommendations to users. There is a growing acknowledgement within the evaluation community that different audiences have different reporting needs, and evaluators are increasingly drawing on a range of reporting methods, including briefings, videos, infographics, presentations, and interactive websites. Yet reports remain the primary reporting format for evaluations. These reports are useful for conveying the detail and richness of evaluation findings, but are too often difficult to read and fall short of meeting their users' needs.

The evaluation profession has long been concerned with quality issues. For example, Schwartz and Mayne (2005) present evidence of evaluation reports' uneven quality. A recent unpublished study of evaluation reports looked at how well they were structured and how clearly and concisely they were written. It concluded that many evaluators struggle to formulate clear findings, write concise sentences, construct coherent paragraphs, and avoid redundancies. These findings are consistent with our experience reading and reviewing evaluation reports over the years. Unfortunately, a poorly written report is less likely to engage users or to make its findings useful, which means the evaluation falls short of improving the prospects of its beneficiaries. Michael Quinn Patton's "utilisation-focused" approach to evaluation is based on the principle that an evaluation should be judged on its usefulness to its intended users. We would argue that the trick to writing a good report is to follow this principle and write with the needs of evaluation users in mind.

Below, we offer some suggestions for improving evaluation report writing. These are based on our own experience and are not exhaustive. However, we hope they will help get you thinking about how to write in a way that supports the use of your evaluation findings.

Get to the point

In the age of information overload, evaluation users demand evaluative information that is valid, credible, relevant, and reliable. When they read an evaluation report, readers generally look for the key takeaways rather than detailed descriptions of research methods or the evaluation's parameters. Regrettably, those detailed descriptions are exactly what many evaluation reports give them.

Evaluators typically spend several months focused on the evaluation design, checking the data, questioning its significance, and examining contradictory data points, so it is no surprise that they might focus on explaining these issues in detail in their reports. As a result, many evaluation reports describe the evaluation plan, the research methodology, and the evaluation criteria at length before finally moving on to the findings. But most readers don't have the time or attention spans for this level of detail and would prefer to just see the main findings and then get on with their days. Instead, they often have to sift through comprehensive reports to decipher for themselves which findings are important and which are not.

User-friendly evaluation reports present only the important information and tend to run to 20-40 pages. In contrast, many, if not most, evaluation reports run to 100-200 pages. Such length can exhaust readers and divert their focus from what is important.

The solution is to exclude less relevant and less important information from your reports. Organize your report around its main messages and avoid copying and pasting material from case studies and other background work. If you must include this material, add it as an appendix.

Develop the best structure for your report

The structure of evaluation reports carries much of the blame for their length. It is common for evaluation writers to structure their reports around the evaluation's research questions or the evaluation criteria, often modelled after the OECD-DAC's six evaluation criteria. Structuring reports this way inevitably creates redundancies because different evaluation criteria or research questions often generate similar findings. It can also lead to "chopped arguments," where writers scatter parts of the same larger finding across different report sections.

Evaluators should instead structure their reports around a small set of main messages or key findings. This requires analyzing the data and formulating the main messages before sitting down to write. It also means that writing the report is not the same thing as answering the evaluation questions or filling in the evaluation criteria. In this "message-based" approach, the writing is dictated by previously identified findings, lessons, and recommendations. It helps evaluators organize their reports and communicate their messages to the reader, while also limiting redundancies and non-relevant or unimportant details.

Report strong findings

On closer reading, some evaluation reports make contradictory or inconsistent statements in different sections, for example describing the program as effective in one place while concluding that its outcomes were largely disappointing in another. This can happen for multiple reasons: evaluators may use the report writing process as part of the analytical process, analyzing the evidence at the same time as they write up the findings; they may add the analysis of new data to a draft report; they may skip a final, thorough edit; or they may not be clear on what the report's key messages should be, perhaps because they haven't fully synthesized and triangulated the evaluation's evidence.

Appropriate use of nuance – discussing context, granularity, and explanatory factors – adds depth to evaluative findings. A thoughtful discussion of the study's characteristics and limitations is both useful and important. However, many reports go too far and rely on qualifying language to "play it safe" when presenting findings. For example, an evaluation might conclude that "the program's process was largely inefficient, though there were exceptions" in situations where a more straightforward and appropriate assessment would simply be that the "program was inefficient." Qualifying language that obscures the main messages should not be confused with nuance, and evaluators should use it sparingly.

Evaluators should be direct and upfront about what the evidence and analysis show. Completing the analysis before writing the report allows evaluators to articulate their findings concisely and consistently. In our practice, we have found small-group workshops with managers and advisors useful for discussing, distilling, and clarifying messages. Ultimately, evaluators should say what they mean and not be afraid to draw conclusions about the value and worth of projects and programs. Evaluators are hired to answer difficult questions, so their answers will sometimes be difficult to hear.

Write for your users

To make evaluations utilisation-focused, reports should focus on user needs. Doing so requires identifying clear findings that provoke thought; writing concise sentences that are easy to follow and free of jargon and redundancy; building coherent paragraphs with clear arguments supported by relevant evidence and analysis; and organizing reports around logical, message-based structures that are easy to navigate.

It's important to keep the intended audiences of a report in mind when writing. These include evaluation users such as executives and technical staff, as well as less experienced users such as project beneficiaries and people who speak English as a second or third language. For this reason, it is a good idea to use plain language where feasible. For example, forest issues in the Global South are notoriously complex. Evaluations of these issues also tend to be complex and highly technical, making reports inaccessible to many of the indigenous and local forest communities to whom they are relevant. So consider how you can frame your findings to ensure the information is accessible to everyone who could benefit from it.

The sections of an evaluation that are most important to users should also be the most accessible. Unfortunately, as the unpublished analysis of evaluation writing shows, executive summaries, conclusions, and recommendations are often the most difficult parts of the report to read. This is because these sections are often negotiated among managers and team members in revisions that don't include editors, and they become hard to follow as a result. They can be improved, as a final step after those discussions, by keeping sentences short, messaging concise, and jargon to a minimum. Where jargon is used, consider including a glossary.

Many professionals in all disciplines overuse the passive voice (for example, "the package was opened," instead of the active voice: "I opened the package"). When evaluators write in the passive voice, their reports are not only less engaging but may also omit crucial information. For example, an evaluator might write that "a decision was taken to discontinue funding window X." Evaluation users need to know whether the program's management, board, or funder decided to discontinue the funding window, and for what reason. Using the active voice and including the relevant information ("The board/management/funder decided to discontinue funding window X because of…") helps evaluation users understand program management and governance issues.

Additionally, evaluators should not skip foundational information. Evaluations often fail to explain key concepts before presenting their findings. For example, one recent evaluation presented quantitative and qualitative data showing that "organizational changes" led to certain inefficiencies. The report described these inefficiencies in detail but never actually described the organizational changes or how they led to the inefficiencies. This foundational information is critical to conveying the message.

Be aware of how the evaluation's scope can affect its usability

Evaluation managers and commissioners can support a utilisation focus when drafting the evaluation's scope of work or terms of reference. Scopes of work are often sprawling documents with a laundry list of questions and sub-questions for evaluators to answer. For example, an evaluation question that asks, "What were the project's short-term and long-term impacts on youth, women, indigenous people, persons with disabilities, and conflict-affected persons?" poses two questions about five separate subgroups, which is ten questions! When multiple evaluation questions are written this way, scopes of work can quickly balloon from an original three questions to 30 or more. Evaluators feel compelled to answer all of these questions, so the depth of analysis suffers. Evaluation commissioners and managers can support evaluators' efforts to produce relevant, thorough, and user-friendly reports by clearly and narrowly defining the evaluation's scope of work.

Allocate time for writing, editing, and quality assurance processes

Evaluation commissioners and managers can enhance report quality by planning for multiple rounds of feedback and multiple drafts. Initial drafts often require a significant amount of additional work to reach the desired depth and clarity. However, evaluation timelines often allocate evaluators ample time for research and data collection but relatively little time to develop messages and write the report. As a result, evaluators and evaluation editors are often rushed to meet deadlines, and rough first drafts become final drafts. Evaluation commissioners therefore need to plan for more tail-end editing and quality assurance.

So what?

To summarize, there are clear steps that evaluators can take to strengthen how they communicate their evidence and findings. These include identifying the most important messages, structuring the report around those messages, cutting unimportant details, ensuring findings are consistent, and writing with clarity. Evaluation commissioners and managers can support a utilisation orientation by focusing the scope of work and budgeting more time for tail-end quality assurance.

Resources / Further reading