Drawing on interviews with 19 UK evaluation commissioners and contractors, this paper investigates the role of evaluation commissioning in hindering the take-up of complexity-appropriate evaluation methods and explores ways of improving this.
We’re continuing our series, sharing ideas and resources on ways of ensuring that evaluation adequately responds to the new challenges posed by the pandemic.
One common criticism of Theory of Change is that it is often used as a framework that fixes agreements rather than as a living, guiding tool that supports reflection and adaptation. Indeed, formally agreed Theories of Change and realities on the ground can differ considerably. This policy brief explores these differences, looking at the interactions between formally agreed Theories of Change and actual advocacy practice within the context of a multi-country advocacy programme.
This working paper series explores how monitoring and evaluation can support good adaptive management of programs. While focused especially on international development, this series is relevant to wider areas of public good activity, especially in a time of global pandemic, uncertainty and an increasing need for adaptive management.
This is the second paper in the BetterEvaluation Working Paper Series, Monitoring and Evaluation for Adaptive Management. It explores the history, various definitions and forms of adaptive management, including Doing Development Differently (DDD), Thinking and Working Politically (TWP), Problem-Driven Iterative Adaptation (PDIA), and Collaborating, Learning and Adapting (CLA). It also explores what is needed for adaptive management to work.
This paper is the first in the BetterEvaluation Monitoring and Evaluation for Adaptive Management working paper series. While focused especially on international development, this paper is relevant to wider areas of public good activity, especially in a time of global pandemic, uncertainty and an increasing need for adaptive management.
Evaluation needs to respond to the changes brought about by the Covid-19 pandemic. As well as direct implications for the logistics of collecting data and managing evaluation processes, the pandemic has led to rapid changes in what organisations are trying to do and how evaluation can best be used to support these changes.
The Covid-19 pandemic has led to rapid changes in the activities and goals of many organisations, whether these relate to addressing direct health impacts, the consequential economic and social impacts or to the need to change the way things are done. Evaluation needs to support organisations to use evidence to plan these changes, to implement them effectively, and to understand whether or how they work – in short to articulate an appropriate theory of change and use it well.
Organisations around the world are quickly having to adapt their programme and project activities to respond to the COVID-19 pandemic and its consequences. We’re starting a new blog series to help support these efforts. Over the next few weeks, we’ll be exploring some of the key issues and questions to be addressed. We’ll be structuring these around the seven clusters of tasks in the BetterEvaluation Rainbow Framework: MANAGE, DEFINE, FRAME, DESCRIBE, UNDERSTAND CAUSES, SYNTHESISE, and REPORT AND SUPPORT USE. We’ll also be creating a complementary thematic area on the BetterEvaluation website to gather this information and associated resources in a more permanent and accessible manner. We see this as a work in progress – new guidance and resources are being developed rapidly as the evaluation community comes together to support one another in this global crisis.
Complexity evaluation framework: Recognising complexity & key considerations for complexity-appropriate evaluation in the Department for Environment, Food and Rural Affairs (DEFRA)
Defra (the UK Department for Environment, Food and Rural Affairs) commissioned CECAN (the Centre for the Evaluation of Complexity Across the Nexus) to deliver a Complexity Evaluation Framework (CEF). The primary purpose of this framework is to equip Defra commissioners of evaluation (who may include analysts and policy makers) with a checklist of core considerations to ensure that evaluations are robust and sufficiently consider the implications of complexity. The framework is intended to increase the use and usability of evaluation for both commissioned and internally led evaluation across the department. The final output is intended to be an actionable complexity evaluation framework, accompanied by a supporting evidence report to be used as a resource in commissioning evaluation.