At the recent 35th conference of the Canadian Evaluation Society in Ottawa I shared my favourite Canadian contributions to evaluation which could be useful more broadly for addressing global challenges in evaluation.
Challenge 1: Lack of use of evaluation in decisions and actions
Despite increasing attention to evaluation, it is still too often not part of how decisions are made and actions are taken. I think this is partly due to a view of evaluation as something done at the end of an activity, and mostly for discrete, short-term projects, rather than as an integrated part of ongoing programmes.
Canadian evaluation has considerable guidance and experience to offer here, as evaluation has for many years been integrated into ongoing management and service delivery.
Burt Perrin has provided useful overviews of lessons to be learned from previous experiences in using performance measurement. His 2002 report to the OECD summarised common challenges in performance measurement.
This report offers evidence that countries are moving away from evaluating the performance of government on activities, inputs and outputs, and focusing instead on a results-driven approach. It identifies the practical steps, best practices and learning that will aid governments in moving towards an outcome approach tailored to the political, social and historical contexts of individual countries.
Steve Montague has for many years explored how performance monitoring and measurement can complement discrete evaluations. His classic 1998 paper argued for including “reach” in logic models – being clear about who is being engaged in an intervention.
More recently he has explored how performance monitoring and evaluation can be used for complex health interventions where simple logic models are not sufficient.
Ian Davies, in a classic 1999 article, discussed ways that evaluation and performance management in government could be complementary. His 2013 presentation at the European Parliament discussed ways that evaluation could strengthen accountability and learning.
Sarah Earl, Fred Carden and Terry Smutylo developed a way of working with partners in complicated programmes with long causal chains to plan, monitor and evaluate their shared contribution to results (see the Outcome Mapping approach page and the complete guide).
In addition to individuals, Canadian public sector organisations have provided useful guidance.
The Treasury Board of Canada Secretariat Centre of Excellence in Evaluation has provided guidance, including evaluation policy and standards for evaluation.
The Office of the Auditor General of Canada has produced reports on managing for results and evaluating the effectiveness of programmes.
Challenge 2: Insufficient collaboration in evaluation
It’s becoming increasingly clear to me that evaluation only works as a “team sport” – when there is constructive partnership between all the involved parties, whether the evaluation is done by an internal team or by an external contractor. Canadian evaluators have lots to teach us about how to support effective working relationships: with internal and external evaluators, between managers and evaluators, and between the different stakeholders involved in an evaluation.
Arnold Love produced Internal Evaluation: Building Organizations from Within which focused on internal evaluation – the particular issues faced by internal evaluation units which are internal to the overall organisation but external to the programme being evaluated.
Brad Cousins and Bessa Whitmore, drawing on deep experience exploring the rationale and processes of participatory evaluation and collaborative inquiry, have provided a framework for thinking about the types of collaboration that can be chosen.
Challenge 3: Overly narrow evaluation questions
An unfortunate result from some versions of evidence-based policy and practice has been a narrow focus on “what works”. Evaluation needs to provide information about “what works for whom, in what circumstances and how” in order to address equity concerns (since what works on average is not necessarily beneficial to the most disadvantaged) and to enable findings to be translated to new settings.
Challenge 4: Need for pragmatic, eclectic combinations of methods, designs and approaches
Particularly in the area of impact evaluation, there is increasing awareness of the need for approaches which can be used for interventions where classic experimental or quasi-experimental designs are not possible.
John Mayne has developed the contribution analysis approach, which takes an iterative approach to collecting and analysing data to build a strong narrative of the contribution of an intervention to observed results. You can find more information and examples on the BetterEvaluation approach page Contribution Analysis.
If you'd like to explore more Canadian contributions to evaluation, check out the presentations and workshop materials from the recent 2014 conference and access articles from the Canadian Journal of Program Evaluation.
What other Canadian contributions to evaluation should we celebrate - and add to the BetterEvaluation site? Contribute in the comments below.
Featured image: Canada, by Mike Gabelmann, on Flickr