Learning from exemplars of evaluation


Are there particular examples of evaluations that you find yourself returning to and thinking about often?

Maybe they are shining examples of what evaluation can look like when it works well – or typical cases that illustrate challenges and issues that often arise. Maybe they're from your own work or from reports of others' work.

The BetterEvaluation team has been thinking about exemplars of evaluation a lot recently in the lead-up to this week's American Evaluation Association conference in Chicago, which is focused on the theme of Exemplary Evaluations in a Multicultural World. By exemplars we mean the examples that stick in our minds and in others’ minds and help us articulate what is good and better evaluation and how to achieve it. They’re an important part of the BetterEvaluation theory of change – to identify, document, share and support learning from exemplars of evaluation from different sectors, countries and situations. They can be used as an inspirational model, a case to practise skills on, or a touchstone for ongoing discussion.

For me, some of the examples that I return to often are:

  • ‘Understanding Anytime’ and ‘Understanding and Involvement (U and I)’ – a consumer evaluation/participatory action research project conducted over several years in an acute psychiatric hospital in Australia by Yoland Wadsworth, Merinda Epstein and Maggie McGuiness. It involved working with patients and with staff to articulate and share their perspectives about the situation and what might be done differently. I think about this whenever someone says it will be hard to do a participatory evaluation, since patients in an acute psychiatric hospital would be one of the most stigmatised and disempowered groups of intended beneficiaries. I use this exemplar like a "black swan" – a rare event – and use analytical generalisation to think about how the lessons from it might be transferred to a less extreme setting. If a participatory, consumer-focused approach could work there, it might be able to work anywhere.
  • Displaced Homemakers Project – an evaluation of a program for middle-aged women returning to the workforce after a marriage breakdown, by Michael Quinn Patton (described in an earlier edition of Utilization-Focused Evaluation). In the early stage of the evaluation, Michael reported back an analysis of their existing client data. To make it more interesting, he asked staff to guess the results of each frequency table. But they thought there was no point guessing on the demographic question of marital status, since they assumed all the women were divorced. However, the analysis showed that many women were separated, not divorced. Further investigation showed that program drop-outs were disproportionately separated women, and, since they were still trying to reconcile, the program was not meeting their needs. This exemplar seemed at odds with what I had understood to be involved in doing a utilisation-focused evaluation – identifying primary intended users and gathering information for their primary intended uses. Fortunately, I had a chance to ask a question about the case at a public lecture. In his reply, Michael made it clear that identifying the primary intended use of the evaluation, and therefore what it needs to focus on, is not a matter of simply asking the primary intended users what they need to know. Instead, a supportive process is needed to identify what is already known, to check this out, and then to have a deeper conversation about what the evaluation needs to focus on. The exemplar, and the opportunity to engage in a dialogue with the author, made explicit some important aspects of practice needed to enact the theory.
  • CCTV in the Car Park – an example from Ray Pawson and Nick Tilley's book Realistic Evaluation about how a theft reduction program involving closed-circuit television in a car park might work – and work differently in different contexts. This is a hypothetical exemplar. I use it frequently in my teaching to help students experience the process of identifying several different possible causal paths, then exploring how these might be more or less likely in different contexts, and what the implications might be for monitoring and evaluation. It's an effective exemplar even for people working in different sectors, because it is simple to understand, and its lessons – about context-dependent causal mechanisms, unintended impacts, and breaks in a causal chain – are readily transferable to different programs.

One of the ways we have supported the development of exemplars of evaluation is through a series of cases of evaluation practice – our writeshop cases. Six writing teams developed accounts of their evaluations through a virtual writeshop process facilitated by Irene Guijt, involving peer reviews of their drafts and a common reference to the BetterEvaluation Rainbow Framework of evaluation tasks. You can find links to these resources at the end of this blog.

At this week's AEA conference, we’ll be exploring ways of developing useful exemplars. We’ll be sharing examples of the exemplars we have used for our own learning and to support others’ learning, and discussing how they can be used, in three different sessions:

Learning to learn from exemplars – strategies and examples (Demonstration), Fri Nov 13, 2015, 7:00-7:45 AM, Room: Randolph
Patricia Rogers and Greet Peersman

How do we learn from exemplars? What counts as an exemplar (a model to be followed strictly, an inspiration, a typical case, or a cautionary tale)? How do we identify, document and analyse them? How do we translate lessons from them into different contexts? This session demonstrates ways of learning from exemplars, drawing on the experiences of BetterEvaluation, an international free-access collaborative platform for learning about evaluation methods and processes across disciplines, sectors, regions and organizations. In addition to curating existing resources, BetterEvaluation generates new material, especially identifying and documenting examples of good practice and developing guidance, and supports users to make good use of these examples and guidance and to apply them appropriately to their particular situation. The session will present some examples of exemplars, in particular a series of cases developed through a virtual writeshop process, and the processes used to develop them and support learning from them.

Building a Foundation for Exemplary Evaluation (Teaching Evaluation Track, Think Tank), Sat Nov 14, 2015, 7:00-7:45 AM, Room: Skyway 273
Patricia Rogers, Kathryn Newcomer, Doreen Cavanaugh, Ann Doucette

A cursory search on exemplary evaluation revealed numerous examples of how to evaluate exemplary programs and how to conduct exemplary self-appraisals, but little in terms of how to ensure that the evaluation effort itself is exemplary. This Think Tank, building on discussion from AEA 2014, tackles the issue of what resources and evaluation tools are needed to craft and yield exemplary evaluations. How does training need to change to construct and respond to evaluative environments; to meet the challenges of globalization; to incorporate complexities and new data sources (e.g., social media); to better inform decision-making and policy formulation; to support diverse and varying levels of evaluation expertise; and to provide flexible, relevant and accessible learning opportunities? This Think Tank, led by faculty of The Evaluators Institute, focuses on what constitutes exemplary evaluation practice; how we can learn from what’s worked and what has not; the role of mentoring and e-coaching; and evaluation opportunities in a global arena.

Learning from Exemplary Evaluation Failures (Presidential Strand Think Tank), Sat Nov 14, 2015, 8:00-9:30 AM, Room: Grand Suite 3
Michael Quinn Patton, Michael Morris, Leslie Cooksy, Stephanie Evergreen, Patricia Rogers

The conference highlights "Learning from Evaluation's Successes." For balance, this session will highlight "Learning from Evaluation's Failures across the Globe." This highly interactive session will begin with five distinguished and experienced evaluators each sharing a personal story of evaluation failure, essentially failure exemplars. Participants will then generate and share failure exemplar stories in small groups. A framework for learning from failure will be used to analyze and interpret shared exemplars of failure from the small groups. This session will be an example of what has come to be known as a Festival of Failure, or Fail-Forward Learning Event. Two aspects of learning from failure are important for evaluators: (1) facilitating learning from failure among the stakeholders we work with and (2) walking the talk of learning from failure by doing it ourselves.

If you’re at the AEA meeting, we’d love to see you and hear about your exemplars. Patricia Rogers, Greet Peersman and Simon Hearn will be attending the conference.

Image: "Learning to Fly" by PsychoDella. https://www.flickr.com/photos/24557420@N05/9009347869

Resources