7 Strategies to improve evaluation use and influence - Part 2


This is the second of a two-part blog on strategies to support the use of evaluation, building on a session the BetterEvaluation team facilitated at the American Evaluation Association conference last year.

While the session focused particularly on strategies to use after an evaluation report has been produced, it is important to address use before and during an evaluation.

In last week’s blog I discussed 3 strategies:

  1. Identify intended users and uses early on
  2. Anticipate barriers to use
  3. Identify key processes and times when findings are needed - and consider a series of analysis and reporting cycles

Here are 4 more strategies to consider using and building into organisational processes:

4. Choose appropriate reporting formats and ensure accessibility

There are many exciting new methods for reporting the findings from evaluations. Making the right choices can increase the likelihood that intended users will know about the findings, understand what they mean, and see why they are important.

It’s likely that a variety of different knowledge products and reporting processes will be needed throughout the evaluation period and after its formal completion.

For example, evaluation managers who are using an evaluation for symbolic purposes might want a large report with substantial technical appendices to demonstrate its credibility. A brief, plain-language summary of findings might be more appropriate to support discussion with non-technical stakeholders, including community members, about the implications of the findings for changes to processes, policies or resource allocation.

At our session at the AEA conference, Nick Petten suggested some innovative ways of reporting results, including:

  • developing an interactive page on the evaluation client’s website that presents the evaluation results, with passive, ongoing data collection to substantiate them
  • a public exhibition of the results for the community, such as a permanent or semi-permanent mural in a public space

Some other options discussed included:

  • producing a video for reporting back to the community
  • doing joint conference presentations that involve the evaluator, the evaluation commissioner and, ideally, other stakeholders (such as community members or program staff).

Read more

The BetterEvaluation site has information about a wide range of reporting formats and strategies to improve accessibility, including accommodating literacy and disability requirements. [We’re in the process of expanding these – please share suggestions on how we can improve them]

Check out the new book by Kylie Hutchinson, Innovative Evaluation Reporting, which includes even more options, including graphic recording, slidedocs, and podcasts. You can read some pages for free through Amazon.

5. Actively and visibly follow up what happens after the evaluation

There are a number of strategies that can be embedded in organisational processes to ensure that the process of doing an evaluation (or having an evaluation done) does not end with reporting findings. These include:

  • Developing a management response to the findings, which can then be included in the evaluation report
  • Tracking responses to recommendations, including whether (and how) accepted recommendations have been implemented

At our AEA session, Stephen Axelrod suggested that evaluation could learn from the emerging field of implementation science, which looks at how findings from research can be applied in practice. This includes identifying the changes needed to existing practices on the basis of new information, and what is needed to produce and maintain those changes. It can also include doing developmental formative evaluation during effectiveness trials to identify and overcome barriers to the adoption of evidence-based practices.

These activities are not necessarily undertaken by an evaluator or an evaluation team. Instead, there might be a transition from an external evaluation that produces findings to internal processes that support change.

Read more

For more information on implementation science, this free-access BMJ article (authored by Mark Bauer, Laura Damschroder, Hildi Hagedorn, Jeffrey Smith and Amy Kilbourne) provides a useful overview. There are also links to relevant methods in the Support Use task in the Rainbow Framework.

6. Ensure there are adequate resources to support follow up activities and the development of additional knowledge products

One of the liveliest discussions at the AEA conference session was about how feasible or reasonable it was for evaluators to undertake additional work after acceptance of the last deliverable, such as producing additional reports or engaging in other processes. 

Some ways to mitigate this issue might be:

  • Building in a notional number of days for the evaluator to remain engaged after the final report, with these days allocated to particular processes or to developing additional material as required – or the funding allocation left unused
  • Funding a subsequent project that produces additional knowledge products and/or works with people to think through specific implications of findings for their practice
  • Allocating the time of internal people to undertake these activities as part of their role in the evaluation

For example, some years ago I led a major evaluation for the Australian government which produced a number of reports about the sustainability of projects with short-term funding, including an issues paper (PDF) and a report of research into the sustained impacts (PDF) of completed projects, which found that projects with a plan for sustainability were more likely to have sustained impacts even after the project ended. My group was then engaged under a separate contract to develop a plain-language version of the issues paper (PDF) to be used by projects, to do additional research with local projects and report this, and then to work with new projects to develop sustainability plans, drawing on these documents.

Share your experience

Do you have examples of Terms of Reference for an evaluation that include resources for multiple types of reporting, or for activities to support use after a report has been produced? Do you have examples of Terms of Reference for post-evaluation projects to repackage findings into new knowledge products or to conduct learning events?

7. Document these strategies in a formal communication and dissemination plan – and update it as needed

Some organisations now require that evaluation plans include a plan for communicating and disseminating the findings, including providing interim results. 

Share your experience

Do you have examples of communication plans for evaluations that you could share?  What has been your experience of developing and using these plans?

The limitations of advance planning

Despite the emphasis on planning for use from the beginning, I don't want to suggest that it is possible to anticipate all the ways that evaluation findings might be useful in the future.

In a comment on last week’s blog, BetterEvaluation member Bob Williams cautioned against thinking about planning for use as if this were possible:

Is anyone else feeling uneasy about the concept of 'intended use for intended users'? It's become a mantra, but to me is an idea stuck in the 90's when we pretended that interventions operated in simple predictable environments. … These days, I try not to start at intended use for intended users, but start at the desired consequences (outcome) of an evaluation. Then we work out the influences needed to achieve those consequences (generally using backcasting approaches) and then identify who could be the main people who ought to use the evaluation in an influential way. It's not easy, and is a work in progress, but it's no different from what the designers and managers of interventions have to do.

Bob Williams,
comment on Part 1 of this series

As Bob reminds us, the process of identifying and prioritising the primary intended uses of an evaluation is not a simple, linear process that can be done at the beginning of an evaluation and then used to develop a static evaluation communication plan. For a start, it can be difficult to identify all the potential uses for an evaluation. Using iterative processes of reporting some data, and discussing its interpretation and implications, can help to build capacity to use evaluations and, ideally, help to shape what kinds of information are generated and how and when they are made available. And evaluations can have more use and impact when they are supported by internal champions who can connect potential users opportunistically.

Share your experience

What do you think of these suggested strategies? Do you have additional strategies to recommend, or good examples? How can our evaluation practices and systems address staff turnover and changing information needs, which can produce big changes in what are seen as ‘intended uses’ and ‘intended users’?

Part 1 of this blog

In case you missed it, you can read part 1 of this two-part series here:

7 Strategies to improve evaluation use and influence - Part 1

What can be done to support the use of evaluation? How can evaluators, evaluation managers and others involved in or affected by evaluations support the constructive use of findings and evaluation processes?  
