7 Strategies to improve evaluation use and influence - Part 1

25th January 2018 by Patricia Rogers

What can be done to support the use of evaluation? How can evaluators, evaluation managers and others involved in or affected by evaluations support the constructive use of findings and evaluation processes?  

This is an enduring challenge: evaluation use was the focus of last year’s UK Evaluation Society conference (see presentations here) and 66 sessions at last year’s American Evaluation Association conference (you can check the Use and Influence of Evaluation track here), and it’s also the focus of this year’s AEA conference on ‘Speaking Truth to Power’ and the Aotearoa/New Zealand Evaluation Association conference  on ‘Evaluation for change/Change for evaluation’.

At last year’s AEA conference, the BetterEvaluation team, with the help of Jane Davidson, convened a session involving conference participants and others through a ‘flipped conference’ format, where information was gathered and shared before the session and then discussed and added to in person. Thank you to everyone who attended virtually or in person and contributed their ideas.

While the session focused particularly on what can be done after an evaluation report has been produced, it’s important to be clear that strategies to improve the use of evaluation need to begin early in the process of doing an evaluation, and be embedded in monitoring and evaluation systems.

We’ve developed a list of strategies for before, during and after an evaluation, and actions that might be taken by the different people involved. This week we’re sharing the first three of these. Next week we’ll share four more – including a discussion of how to make sure resources are available to actively support the use of findings after the final evaluation report has been finished.

1. Identify intended users and intended uses of the evaluation early on

Newcomers to evaluation often move straight to choosing methods for data collection; experienced practitioners and those familiar with the extensive research on evaluation use know the importance of identifying intended users and uses early on.  Many organisations require this strategy be used when planning an evaluation, and the BetterEvaluation GeneraTOR includes this in the guidance for developing a Terms of Reference for an evaluation. 

When identifying intended users, be as specific as possible, be clear about who are the primary intended users, and consider whether and how public reporting of findings might be used – for example, to encourage public officials to respond to findings. 

It’s often easier to identify intended instrumental use, where an evaluation is intended to inform a specific decision about improvement or about continuation/expansion. It can also be helpful to consider whether there is intended conceptual or enlightenment use (changing the way people in a program, or more broadly, think about an intervention), process use (affecting people’s understanding, relationships and/or practices through the process of the evaluation rather than through its findings), or symbolic use (such as signalling that an intervention is working well or that it is being effectively managed).

This process might be done before an evaluation starts, by the commissioners of an evaluation as part of developing a Terms of Reference.  It can also be done as a process led by the evaluator or evaluation team and reported in an inception report.  It should be reviewed throughout an evaluation, especially a multi-year evaluation, to check if there have been changes in users and uses over time that mean the evaluation should change.

Read more

There is more information on ways of identifying primary intended users and clarifying the intended uses of an evaluation. The approach Utilisation-Focused Evaluation pays particular attention to this issue: all decisions about an evaluation are made on the basis of their implications for meeting the intended uses of the identified primary intended users.

2. Anticipate barriers to use

Many barriers to use have been identified, including the credibility and perceived relevance of the evaluation report(s), the resources and authority to make changes in response to findings, and the openness to receiving negative findings (that a program doesn’t work or isn’t being implemented as intended).

In some cases, it will be possible to plan the evaluation in ways that overcome or reduce these barriers. For example, the technique of data rehearsal can establish what would constitute credible evidence while the evaluation is being designed – it involves reviewing, with primary intended users, tables, graphs and quotes containing hypothetical data that the evaluation could produce.

Read more

There are a range of strategies that can be used to make it easier for people to receive negative findings.

3. Identify key processes and times when findings are needed – and consider a series of analysis and reporting cycles

[Cartoon by Chris Lysy, FreshSpectrum]

Keeping all the reporting to the end of an evaluation risks missing the time when decisions need to be made.  And it misses the opportunity to iteratively build understanding of and commitment to use findings.  Many evaluations are set up to fail because they are designed to deliver findings too late to inform key decisions.

Instead, key decision points and processes should be identified, and the timing of evaluation reports and activities should be organised around these. At our AEA session, Heather Britt referred to this as “baking it in”.

Jade Maloney, who has recently published research on evaluation use, shared an example of how she approached this in a recent evaluation:

Over the three-year evaluation, there were three reporting phases so early findings could inform ongoing rollout. Each reporting phase had a face-to-face discussion with managers before written reporting to support shared interpretation of findings. After the final phase, there was a discussion group with key frontline staff to reality-test and identify how the recommendations could be taken up. Finally, there was a joint conference presentation on the evaluator recommendations and the organisation's response, plus progress. The multiphase approach enabled improvements to be identified, then implemented and tracked in the next phase of the evaluation. The systems set up wouldn't have been enough without organisational commitment to learning from the evaluation.

Read more

A reporting needs analysis might be a useful process for gathering and recording information about what information is needed, by whom, when, and in what format. Techniques such as a data party and data placemats can make it easier to engage intended users with findings.

Thank you to everyone who participated in the session, virtually or in person, and especially those who shared specific strategies:

  • Heather Britt
  • Jade Maloney
  • Michael Quinn Patton
  • Nick Petten
  • Stephen Axelrod

Next week we'll be publishing part two of this blog and sharing 4 more strategies to help support the use of evaluation findings.

Author: Patricia Rogers, Director of BetterEvaluation / Professor of Public Sector Evaluation, Australia and New Zealand School of Government, Melbourne.

Comments

Nanfuka Nusulah

This has been helpful information

Bob Williams

Is anyone else feeling uneasy about the concept of 'intended use for intended users'?  It's become a mantra, but to me it is an idea stuck in the 90s, when we pretended that interventions operated in simple, predictable environments.  These days it's rare that we don't have to at least tip our hat to the notion of complexity and the associated uncertainties between inputs, outputs and outcomes.  Yet we still design evaluations as if the relationship between the inputs and outcomes of our evaluations (intended use by intended users) were simple and mechanical rather than organic.  We put our customers through all kinds of hoops drawing up theories of change, and criticise how bad they are at it, but I have yet to see any evaluator construct a theory of change for the impact of their own evaluation.  Ironic, certainly.  Hypocritical, arguably.  A missed learning opportunity that would bring home to us how bloody difficult it is, undoubtedly.  These days, I try not to start at intended use for intended users, but at the desired consequences (outcomes) of an evaluation.  Then we work out the influences needed to achieve those consequences (generally using backcasting approaches) and then identify the main people who ought to use the evaluation in an influential way.  It's not easy, and it is a work in progress, but it's no different from what the designers and managers of interventions have to do.

Patricia Rogers

Thanks for the thoughtful comments, Bob.  I think I do create theories of change for evaluations, in my head at least (but then I do that for everything!).  I agree that not all the actual or important users and uses can be identified in advance.  So there needs to be an iterative and adaptive process of tailoring the evaluation. 

Some intended users and uses can be identified in advance, and are fairly stable.  Others will emerge, so the evaluation ideally needs to have some spare capacity and a suitable governance process so it can be flexible. 

And in many cases, people's understanding of their evaluation needs, and of how they will use an evaluation, will change or become clearer. So identifying how an evaluation will be used is not a matter of simply asking people at the start, but of working with them iteratively, using techniques such as data rehearsal.

Bob Williams

Perfect response Patricia.  I'm just aware that for some people 'intended use for intended users' is more of a mantra than a work in progress.
