What is evaluation to do? Ways of responding to the climate and environment crisis – Part 2


This blog is the second in a two-part series on the issues raised about COP26 in papers published in the journal Evaluation.

6. Improve the analysis of causal packages working in different contexts

Several papers discussed the need for a better understanding of how climate change interventions work in particular contexts – and how and why they might not work. Robert Picciotto recommended developing a ‘theory of no change’ in addition to a theory of change – identifying the factors that hindered the achievement of intended results.

Eleanor Chelimsky drew a parallel with evaluations of poverty programs in the USA in the 1960s, which failed because of external factors that went unmeasured:

“climate change efforts, given their vastly greater size and complexity, will need to understand and account for those unmeasured factors – such as current context, history, politics and especially impacted cultures – as never before in evaluation, to my knowledge, if we are to succeed”

Understanding how interventions work, or fail to work, will require a step up in evaluator competencies:

“Evaluator competencies have not kept up: few evaluators are adept at model building, or familiar with the new causality methods that have begun to displace the expensive and cumbersome randomised evaluation methods.” (Robert Picciotto)

Climate change interventions, and the climate change impacts of other interventions, make the limitations of the ‘gold standard’/evidence hierarchy approach more serious. If we only include evidence from studies where it has been possible to create or identify a counterfactual (such as a control group), we will only build evidence about interventions where this design is possible – such as programs focused on individual behaviour change rather than national policies.
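To make this limitation concrete, here is a minimal, purely illustrative sketch (synthetic data and invented numbers) of the counterfactual logic behind a control-group design, and of why it cannot be applied when no untreated comparison group exists:

```python
import random
from statistics import mean

random.seed(1)

# Hypothetical household behaviour-change programme (all numbers invented):
# households are randomised into treatment and control groups, so a simple
# difference-in-means counterfactual estimate of the effect is available.
control = [random.gauss(10.0, 2.0) for _ in range(500)]  # daily kWh without the programme
treated = [random.gauss(8.8, 2.0) for _ in range(500)]   # daily kWh with the programme

effect = mean(treated) - mean(control)
print(f"Estimated effect: {effect:+.2f} kWh per household per day")

# A national policy applies to every unit at once: there is no untreated
# comparison group, so this estimator cannot even be computed. An evidence
# hierarchy that admits only such designs excludes whole classes of
# interventions regardless of their merit.
```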

By contrast, the recent report ‘Working Towards a Greener Global Recovery’ by the Independent Evaluation Office of the Global Environment Facility shows a more practical and valid approach to causal inference for national- and regional-level interventions:

“Credible claims of contribution will be made if (1) the intervention is logically and feasibly designed to directly or indirectly result in the desired benefits as outlined in the theory of change; (2) the intervention is implemented as designed; (3) the immediate results occur as expected in the causal chain; and (4) other rival explanations for the results have either been considered and rejected, or their relative role in making a difference to an observed result has been adequately recognised”. (Seventh Comprehensive Evaluation of the GEF – Approach Paper, p. 20).
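These four conditions amount to a checklist that must be satisfied in full. As a minimal sketch of that logic (the field names and structure below are illustrative, not taken from the GEF paper):

```python
from dataclasses import dataclass

@dataclass
class ContributionEvidence:
    """Illustrative only: field names are ours, not the GEF's."""
    logically_designed: bool          # (1) design plausibly leads to the desired benefits
    implemented_as_designed: bool     # (2) implementation matched the design
    immediate_results_observed: bool  # (3) early links in the causal chain occurred
    rivals_addressed: bool            # (4) rival explanations rejected or their role recognised

def claim_is_credible(evidence: ContributionEvidence) -> bool:
    # A credible contribution claim requires all four conditions to hold.
    return (evidence.logically_designed
            and evidence.implemented_as_designed
            and evidence.immediate_results_observed
            and evidence.rivals_addressed)

# A project whose early results occurred but whose rival explanations were
# never examined does not support a credible claim:
print(claim_is_credible(ContributionEvidence(True, True, True, False)))  # False
```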

7. Provide better support for adaptation and learning

One of the ways evaluation practice needs to change is to become more useful for adaptation and learning, rather than for long cycles of building knowledge about ‘what works’ and then scaling it up. This is needed partly because of the urgency of learning to act effectively, and partly because of the need for local adaptation to localised and changing contexts:

“The typical cycle for a programme or project is 5–7 years from conception to conclusion. The 2030 Paris Agreement goals are little more than one project cycle away. As evaluation typically occurs at the conclusion and perhaps midpoint of interventions, evaluation advice from interventions starting now may not be available until 2030. We require systematic and active adaptive management informed by more timely evaluation efforts to address sustainability.” (Andy Rowe)

Such a change does not mean simply doing the same sorts of evaluations on shorter timeframes. It means finding ways of actively and effectively supporting better decisions and actions under conditions of ongoing uncertainty. Evaluation for adaptive management, and for management that addresses complexity, needs to move beyond being a niche interest.

8. Support use and combat misinformation

The papers offered mixed messages about how best to support use, reflecting the authors’ different roles and experiences.

Robert Picciotto, among others, urged evaluators to engage as advocates for processes, equity and justice:

“… the widely prevailing notion that advocacy and evaluation are incompatible should be revisited. Principled advocacy is a value commitment to participatory decision-making, social equity and environmental justice. Rather than acting as pipers compelled to play commissioners’ tunes, evaluators should seek independent funding that allows alliances with progressive advocacy groups. This would increase the effectiveness of advocacy movements and facilitate the utilisation of evaluation findings.” (Robert Picciotto)

For Eleanor Chelimsky, given her experience leading an evaluation unit reporting to Congress, it was essential not to be seen as an advocate but to be conscious of the broader political picture.

“Glasgow finds us with countless special-interest groups already arrayed against many climate change measures, silos erected internationally to impede the sharing of data and misinformation now a very common element of popular thinking. We will, of course, as always, need to bolster evaluation’s credibility with prestigious intellectual networking, avoid any kind of advocacy and prepare strong defences against expected lobbying. But it is also important for evaluators to have a very good understanding of both sides of the partisan divide if we hope to affect it.”

Scott Chaplowe suggested that evaluators might play more of a role in combatting misinformation:

“As a profession built on reliable and credible evidence, evaluation is uniquely positioned to counter the spread of misinformation. Evaluators have a responsibility to speak truth to power and champion science, evidence-based data and facts beyond the findings of any particular evaluation.” (Scott Chaplowe)

9. Engage and work with different experts and expertise

Evaluators are going to need to learn how to engage with different experts and expertise – including a wider range of disciplines and sectors, and local and Indigenous experts:

“To offer relevant solutions, we need to work with civil engineers, city planners, geographers, local leaders and Indigenous knowledge holders. We must seek to juxtapose and integrate complementary bodies of skills and knowledge and embrace the complexity of whole system thinking.” (Astrid Brousselle)

“To play their role effectively in the climate space, evaluators need to partner with specialists in the various domains of the field, notably natural scientists with understanding of atmospheric processes and ecosystem functioning, as well as engineers working on technological innovation.” (Juha Uitto)

“Evaluators alone cannot effect change and save the world but in partnership with others – policymakers, researchers, scientists, project proponents, civil society, journalists and so on – we can contribute to better solutions and spread much-needed evaluative thinking.” (Juha Uitto)

We also need to get better at intergenerational engagement:

“We need to work intergenerationally, across the different generations of evaluators. Intergenerational alliances should build on the skills and knowledge of both established and still emerging evaluators who are beginning to find their voice.” (Weronika Felcis)

Final thoughts

These changes in evaluation practice will also require changes in how evaluation terms of reference are developed and managed, how evaluation teams are formed and selected, and how evaluators and evaluation managers are trained.


They will also require changes in the broader machinery of evaluation, to ensure that resources are available to evaluate the climate impacts of wider interventions and that evaluations are not always controlled by the funding organisations. Work by the International Evaluation Academy to secure funding for independent evaluations could be a valuable strategy here.

These lessons present significant challenges to individuals and organisations. We all need to do our bit. Read the full papers here and see which of the recommended lessons you might be willing to learn.

Contributors to the special issue

  • Rob D. van den Berg, past President of the International Development Evaluation Association (IDEAS), former Director of the Independent Evaluation Office of the Global Environment Facility
  • Dennis Bours, coordinator of the Adaptation Fund Technical Evaluation Reference Group (AF-TERG) secretariat
  • Astrid Brousselle, professor and director of the School of Public Administration at the University of Victoria, Canada
  • Jindra M. Čekan, political economist who works in international development
  • Scott G. Chaplowe, evaluation and strategy specialist
  • Eleanor Chelimsky, independent consultant and former director of the Program Evaluation and Methodology Division at the US General Accounting Office
  • Ian C. Davies, evaluation consultant, past President of the European Evaluation Society (EES)
  • Weronika B. Felcis, University of Latvia, board member of EES and EvalPartners
  • Timo Leiter, independent consultant on climate change adaptation and monitoring and evaluation
  • Debbie Menezes, chair of the Adaptation Fund Technical Evaluation Reference Group (AF-TERG)
  • Robert Picciotto, former Vice President and Director General of the Independent Evaluation Group of the World Bank, and a founding board member of the International Evaluation Academy
  • Patricia J. Rogers, consultant, member of the Footprint Evaluation Initiative
  • Andy Rowe, evaluation consultant, former President and a Fellow of the Canadian Evaluation Society
  • Juha I. Uitto, Director of the Global Environment Facility Independent Evaluation Office.
