7 Strategies to improve evaluation use and influence - Part 1


What can be done to support the use of evaluation? How can evaluators, evaluation managers and others involved in or affected by evaluations support the constructive use of findings and evaluation processes?  

This is an enduring challenge: evaluation use was the focus of last year's UK Evaluation Society conference and of 66 sessions at last year's American Evaluation Association (AEA) conference, and it's also the focus of this year's AEA conference on 'Speaking Truth to Power' and the Aotearoa/New Zealand Evaluation Association conference on 'Evaluation for change/Change for evaluation'.

At last year’s AEA conference, the BetterEvaluation team, with the help of Jane Davidson, convened a session involving conference participants and others through a ‘flipped conference’ format, where information was gathered and shared before the session and then discussed and added to in person. Thank you to everyone who attended virtually or in person and contributed their ideas.

While the session focused particularly on what can be done after an evaluation report has been produced, it’s important to be clear that strategies to improve the use of evaluation need to begin early in the process of doing an evaluation, and be embedded in monitoring and evaluation systems.

We’ve developed a list of strategies for before, during and after an evaluation, and actions that might be taken by different people involved in evaluation. This week we’re sharing 3 of these. Next week we’ll share 4 more – including a discussion of how to make sure resources are available to actively support the use of findings after the final evaluation report has been finished.

1. Identify intended users and intended uses of the evaluation early on

Newcomers to evaluation often move straight to choosing methods for data collection; experienced practitioners and those familiar with the extensive research on evaluation use know the importance of identifying intended users and uses early on.  Many organisations require this strategy be used when planning an evaluation, and the BetterEvaluation GeneraTOR includes this in the guidance for developing a Terms of Reference for an evaluation. 

When identifying intended users, be as specific as possible, be clear about who are the primary intended users, and consider whether and how public reporting of findings might be used – for example, to encourage public officials to respond to findings. 

It’s often easier to identify intended instrumental use, where an evaluation is intended to inform a specific decision about improvement or about continuation/expansion. It can also be helpful to consider whether there is intended:

  • conceptual or enlightenment use – changing the way people in a program, or people more broadly, think about an intervention
  • process use – having an effect on people’s understanding, relationships and/or practices through the process of an evaluation rather than through its findings
  • symbolic use – such as signalling that an intervention is working well or that it is being effectively managed

This process might be done before an evaluation starts, by the commissioners of an evaluation, as part of developing a Terms of Reference. It can also be led by the evaluator or evaluation team and reported in an inception report. It should be reviewed throughout an evaluation, especially a multi-year evaluation, to check whether users and uses have changed over time in ways that mean the evaluation should change.

Read more

There is more information on ways of identifying primary intended users and clarifying the intended uses of an evaluation. The Utilisation-Focused Evaluation approach pays particular attention to this issue: all decisions about an evaluation are made on the basis of their implications for meeting the intended uses of the identified primary intended users.

2. Anticipate barriers to use

Many barriers to use have been identified, including the credibility and perceived relevance of the evaluation report(s), the resources and authority to make changes in response to findings, and the openness to receiving negative findings (that a program doesn’t work or isn’t being implemented as intended).

In some cases, it will be possible to plan the evaluation in ways that overcome or reduce these barriers. For example, the technique of data rehearsal can establish what would constitute credible evidence while the evaluation is being designed: it involves reviewing with primary intended users mock tables, graphs and quotes populated with hypothetical data the evaluation could produce.

Read more

There are a range of strategies that can be used to make it easier for people to receive negative findings.

3. Identify key processes and times when findings are needed – and consider a series of analysis and reporting cycles

(Cartoon: Chris Lysy)

Keeping all the reporting to the end of an evaluation risks missing the times when decisions need to be made, and it misses the opportunity to iteratively build understanding of, and commitment to using, the findings. Many evaluations are set up to fail because they are designed to deliver findings too late to inform key decisions.

Instead, key decision points and processes should be identified, and the timing of evaluation reports and activities organised around them. At our AEA session, Heather Britt referred to this as “baking it in”.

Jade Maloney, who has recently published research on evaluation use (PDF), shared this example of how she approached this in a recent evaluation:

Over the three-year evaluation, there were three reporting phases so early findings could inform the ongoing rollout. Each reporting phase included a face-to-face discussion with managers before written reporting, to support shared interpretation of findings. After the final phase, there was a discussion group with key frontline staff to reality-test the recommendations and identify how they could be taken up. Finally, there was a joint conference presentation on the evaluator recommendations, the organisation’s response, and progress. The multiphase approach enabled improvements to be identified, then implemented and tracked in the next phase of the evaluation. The systems set up wouldn’t have been enough without organisational commitment to learning from the evaluation.

Read more

A reporting needs analysis might be a useful process for gathering and recording information about what information is needed, by whom, when, and in what format. Techniques such as a data party and data placemats can make it easier to engage intended users with findings.

Thank you

Thank you to everyone who participated in the session, virtually or in person, and especially those who shared specific strategies:

  • Heather Britt
  • Jade Maloney
  • Michael Quinn Patton
  • Nick Petten
  • Stephen Axelrod

Part 2 of this blog

In part 2 of this blog, we share 4 more strategies to help support the use of evaluation findings. Read it here:

7 Strategies to improve evaluation use and influence - Part 2

This is the second of a two-part blog on strategies to support the use of evaluation, building on a session the BetterEvaluation team facilitated at the American Evaluation Association conference last year.
