Evaluation is a powerful tool that can provide useful, evidence-based information to help inform and influence policy and practice. But useful evaluation is not just a matter of individual evaluation studies; evaluation’s value to an organisation also depends on the degree to which an evaluation and learning culture is embedded in that organisation. This means the organisation recognises and appreciates evaluation’s role and the functions it can offer, particularly in helping to understand what the organisation is achieving and where and how improvements can be made.
Useful evaluation is therefore characterised by two important features: (1) the quality, and thus credibility, of discrete evaluation studies; and (2) the extent to which evaluation is embedded in an organisational culture that wants to know what it is achieving and where and how improvements can be made.
In many organisations and public administrations around the world, it is becoming common practice to assign the management of evaluation to a person (either full time or alongside other tasks) or to a dedicated internal evaluation unit. Such a person or unit can play a key role in: (1) assuring the quality and usefulness of individual studies; and (2) helping to build a sustainable evaluation culture from within by identifying, institutionalising and managing supportive structures and processes.
Building an evaluation culture from within is certainly a challenge! So much so that I wanted to document some of the strategies colleagues from around the world have used to take it up. John Mayne and I invited several evaluation managers to tell us their stories, and we then put together a book providing real-life examples of what is being done (see Läubli Loud, Marlène and Mayne, John (2014) Enhancing Evaluation Use: Insights from Internal Evaluation Units, Sage Publications, USA). In this blog I want to bring to your attention just a few of the examples discussed in our book. They also raise the question of what competencies evaluation managers need in order to take on this twofold role; some thoughts on this conclude the blog.
Challenge 1: Engaging the organisation’s executive and/or senior management
Involving the executive and/or senior management of the organisation in evaluation is strategically desirable, but not always easy to achieve; top executives are often already over-committed and preoccupied elsewhere. Without such support, however, the need for evaluation is likely to be (periodically) questioned, especially in times of austerity and budget cutbacks. One organisational response is described in chapter 6 of our book by the Public Health Agency of Canada. The Agency used an exemplary holistic approach as a means of engaging the support and interest of the “whole” organisation, including top management. An organisational-level logic model was developed to illustrate the Agency’s work at every level, from the highest down through units and functions to individual programmes and activities. Using a participatory approach, the evaluation unit gathered the information needed to develop this ‘bigger picture’, working hand in hand with an external consultant and a working group of staff members drawn from different parts of the organisation. Through engaging in the discussions, employees in the various units learned how their work could contribute to change and help secure the organisation’s ultimate outcomes. The idea, ultimately, was to have an overarching results framework that could be used to strengthen a results-based focus across the whole of the organisation.
The logic model was developed and refined several times, based on feedback from the discussions held along the way. The model shows programme activities, reach and expected outcomes. It incorporates “several notable innovations in logic modelling: spheres of influence (Montague, Young, & Montague, 2003), explicit inclusion of reach (Montague, 2000; Montague & Porteous, 2012; Montague, Porteous & Sridharan, 2011) and framing stakeholder engagement as an outcome rather than a process or activity (Porteous & Birch-Jones, 2007, based on work with Montague)” (p. 128 in Läubli Loud & Mayne, 2014).
The overarching model was then used to develop more detailed ones, particularly for horizontal functions. One example is a logic model for the policy function (see Figure 6.3, p. 130 in Läubli Loud & Mayne, 2014), which, of course, was particularly relevant to senior managers.
Although there was some initial resistance from senior management to the concept, programme staff were quick to use even the “draft” model for planning their programme and performance-monitoring activities. Though it is still early days, the authors feel that “over time, such a tool should cultivate more coherence and consistency in the Agency's results storyline – up, down, and across the organization” (p. 132 in Läubli Loud & Mayne, 2014).
Challenge 2: Independence in the Evaluation Process
Establishing an evaluation unit that is structurally, functionally and operationally independent from the influence and interests of line management and/or external partners increases evaluators’ credibility and their ability to “speak truth to power”. Engaging an “independent external” evaluation team is often seen as the best assurance of independence and objectivity. But “external” and “unbiased” are not synonymous: there are many opportunities for bias to be introduced at various points throughout the evaluation process, even when the evaluation is outsourced to an external team (for more on this, see Scriven, 1975).
So what strategies are internal evaluation units using to rise to this challenge? Involving a broad range of stakeholders throughout the evaluation process is a strategy touched on by several of the book’s contributing authors. In my own chapter I refer to an evaluation of a highly politically sensitive nature. The study was indeed commissioned from an external evaluation team, but from the outset representatives of a broad range of stakeholder groups (each with their own interests and potential biases) were invited to take part in an Evaluation Advisory Committee and follow the process from beginning to end: formulating the evaluation questions and agreeing the Terms of Reference; periodically reviewing progress, such as discussing and agreeing the theory of change; deliberating over and contextualising the conclusions and recommendations; and, finally, suggesting ways and means for translating these into action. The arrangement was able to deal with conflicting interests by requiring the group to reach a collective consensus.
Challenge 3: Addressing evaluation manager competencies
Evaluation managers are key in ensuring that evaluations are useful and used (see, for example, Love, 1993; Mayne, 2008; Owen, 2003; Preskill and Torres, 1999; Russ-Eft and Preskill, 2009). Their role is often poorly understood, under-resourced and undervalued, so that, inevitably, there is no clear career path for them to follow. It is also frequently assumed that they have been trained in, or are experienced in, doing evaluations; this is not necessarily the case. Yet little effort has been made to professionalise their work and provide them with the competencies they need to carry out their tasks effectively.
Because of the vital role they can play in shaping the valuing and use of evaluation within their organisations, several professional associations have been paying more attention to determining what evaluation managers need to know, and the skills and competencies they need to have. The Swiss Evaluation Society (www.SEVAL.ch), for example, will shortly publish on its website a document entitled Evaluation Manager Tasks, Challenges and Competencies, including a list of the essential competencies (anticipated in Spring 2015, once the website has been updated).
To conclude, the few examples presented above give some idea of the thoughtful strategies evaluation managers are using to face the challenging realities of managing evaluation today. Each demonstrates how these managers and units identify and seize opportunities to enhance the use and value of evaluation within their organisation, at both institutional and individual levels.
References

Montague, S. (2000). "Focusing on inputs, outputs, and outcomes: Are international approaches to performance management really so different?" Canadian Journal of Program Evaluation 15(1): 139-148.
Montague, S. and Porteous, N.L. (2012). “The case for including reach as a key element of program theory.” Evaluation and Program Planning, Special Issue on Evaluation and Health Inequities.
Montague, S., Porteous, N.L. and Sridharan, S. (2011). "The need to build reach into results logic and performance frameworks", Fifteenth Annual PPX Symposium, Ottawa, ON.
http://www.ppx.ca/download/learning_events/2010-2011/January2011/LE_Jan2... (accessed May 14, 2011).
Montague, S., Young, G. and Montague, C. (2003). "Circles tell the performance story". Canadian Government Executive 2: 12-16.
Porteous, N.L. and Birch-Jones, J. (2007). "Getting to engagement: What it is and how it can be measured", Canadian Evaluation Society Annual General Meeting, Winnipeg, MB.
http://www.evaluationcanada.ca/distribution/20070606_porteous_nancy_birc... (accessed June 21, 2011).
Sandison, P. (2006). “The Utilisation of Evaluations” in: ALNAP Review of Humanitarian Actions, Overseas Development Institute, London.
Scriven, M. (1975). Evaluation bias and its control. University of California, Berkeley.
Book Reviews of Enhancing Evaluation Use
Armstrong, A. (2014). Evaluation Journal of Australasia, Vol. 14, No. 1, pp. 43-45.
Dahler-Larsen, P. (2014). American Journal of Evaluation, online publication, 7 November. Downloadable at: http://aje.sagepub.com/content/early/2014/11/07/1098214014557321
Picciotto, R. (2013). Evaluation in Organizations: A Book Review, UK Evaluation Society. Downloadable at: www.oecd.org/dac/evaluation/Picciotto-book-review.pdf