The BetterEvaluation Resource Library contains hundreds of curated and co-created resources related to managing, conducting, using, and strengthening capacity for evaluation.
You can use the search field and filtering on this page to find resources that you are interested in or you can browse our extensive list. An alternative way to find resources best suited to your needs is to explore the Rainbow Framework, where you can find resources relating to evaluation methods, approaches and tasks.
What counts as good evidence?
This paper, written by Sandra Nutley, Alison Powell and Huw Davies for the Alliance for Useful Evidence, discusses the risks of using a hierarchy of evidence and suggests more complex matrix approaches as an alternative.
Pathways to advance professionalisation within the context of the AES
This report by Greet Peersman and Patricia Rogers for the Australasian Evaluation Society (AES) identifies four potential pathways towards professionalisation within the context of the AES.
Evaluator competencies: The South African Government experience
This article describes the South African government's process of developing evaluator competencies.
Applying a human rights and gender equality lens to the OECD evaluation criteria
This publication responds to the need for practical guidance for evaluators, evaluation managers, and programme staff to incorporate a human rights and gender equality lens into the six OECD evaluation criteria: relevance, coherence, effectiveness, efficiency, impact, and sustainability.
Navigating competing demands in monitoring and evaluation: Five key paradoxes
In this article, Marijn Faling, Sietze Vellema, and Greetje Schouten report on five paradoxes in monitoring and evaluation, each encompassing two competing logics. This resource was contributed by Marijn Faling.
The art and craft of bricolage in evaluation
This CDI Practice Paper, by Tom Aston and Marina Apgar, makes the case for ‘bricolage’ in complexity-aware and qualitative evaluation methods.
Rethinking rigour to embrace complexity in peacebuilding evaluation
This 2024 open-access journal article presents the inclusive rigour framework and applies it to three cases of peacebuilding evaluation.
Challenges and strategies for implementers and evaluators working in conflict settings
This article addresses logistical, methodological, and ethical challenges in conflict zones, offering strategies through implementation science frameworks like RE-AIM and CFIR.
Equitable evaluation in remote and sensitive spaces
The article examines equitable evaluation in remote and sensitive spaces, using case studies to highlight the application of the Equitable Evaluation Framework™ in DRG programs.
Ethical research landscapes in fragile and conflict-affected contexts: Understanding the challenges
The paper critiques ethical guidelines for research in conflict and fragile contexts, exploring systemic injustices and advocating for comprehensive ethical practices across all phases of the research lifecycle.
Synthesis of evaluations in South Sudan: Lessons learned for engagement in fragile and conflict-affected states
This article synthesizes evaluation reports from South Sudan to inform decision-making in fragile states, highlighting the need for better project design, flexibility, and long-term commitment to enhance sustainability.
Quick tips to assess the risks of AI applications in monitoring and evaluation: EvalSDGs Insight #19
This Evaluation Insight from EvalSDGs succinctly lays out risks associated with using artificial intelligence (AI) in monitoring and evaluation (M&E).
Monitoring and accountability practices for remotely managed projects implemented in volatile operating environments
This report explores monitoring and accountability practices for remotely managed projects in volatile environments, highlighting a trend of remote management as a long-term approach rather than a temporary solution.
Back to the drawing board: how to improve monitoring of outcomes
This paper explores challenges in outcome monitoring for humanitarian interventions, emphasizing the need for adaptive learning, better resource management, and overcoming sectoral silos.
Being practical, being safe: Doing evaluations in contested spaces
This paper offers practical guidance for conducting evaluations in conflict zones, focusing on safety, ethics, and adaptability.
Building statistical capacity in fragile and conflict-affected states
This paper reviews the IMF’s efforts to improve statistical capacity in FCV states, highlighting challenges in data collection due to instability and offering case studies on capacity development and data quality improvement.
Discussion note: Third-party monitoring in non-permissive environments
This guidance note outlines the use of third-party monitoring (TPM) in non-permissive environments, providing strategies for effective data collection in inaccessible or insecure areas.
Evaluation under occupation: The role of evaluators in protecting and promoting social justice and equality in conflict-affected and fragile contexts (the case of the occupied Palestinian territory)
This paper explores the role of evaluators in promoting social justice in conflict-affected settings, focusing on the occupied Palestinian territory.
Estudio de brechas entre necesidades y oferta de programas para desarrollar capacidades de monitoreo y evaluación en América Latina y el Caribe
The objective of this CLEAR LAC research was to study the gaps between training needs and training offerings, in order to design a relevant and pertinent monitoring and evaluation (M&E) training programme.
Reimagining the language of engagement in a post-stakeholder world
This article explores how the term "stakeholder" can unintentionally reinforce colonial narratives and systemic inequities.
Evaluation Matters: Knowledge brokering and use of evidence in tackling Africa’s challenges
This edition of eVALUation Matters addresses the challenges of encouraging the use of evaluation and evidence in decision-making and aligning knowledge needs with what is available and relevant to the African context.
Evaluation literacy: Perspectives of internal evaluators in non-government organizations
This paper explores how internal evaluators in Australian non-government organisations (NGOs) develop and promote evaluation literacy to enhance evaluation use and organisational learning.
What can we learn from qualitative impact evaluations about the effectiveness of lobby and advocacy? A meta-evaluation of Dutch aid programmes and assessment tool
This paper presents the results of a meta-evaluation that studied evaluations of lobby and advocacy (L&A) programs across Asia, Africa and Latin America.
Local Ownership in Evaluation: Moving from Participant Inclusion to Ownership in Evaluation Decision Making
This briefing paper explores how local ownership can be extended to evaluation processes, not just programme design or delivery.