The BetterEvaluation Resource Library contains hundreds of curated and co-created resources related to managing, conducting, using, and strengthening capacity for evaluation.
You can use the search field and filtering on this page to find resources you are interested in, or you can browse our extensive list. Alternatively, explore the Rainbow Framework to find resources relating to evaluation methods, approaches, and tasks.
Evaluator competencies: The South African Government experience
This article describes the South African government's process of developing evaluator competencies.
Applying a human rights and gender equality lens to the OECD evaluation criteria
This publication responds to the need for practical guidance for evaluators, evaluation managers, and programme staff to incorporate a human rights and gender equality lens into the six OECD evaluation criteria: relevance, coherence, effectiveness, efficiency, impact, and sustainability.
Navigating competing demands in monitoring and evaluation: Five key paradoxes
In this article, Marijn Faling, Sietze Vellema, and Greetje Schouten report on five paradoxes in monitoring and evaluation, each encompassing two competing logics. This resource was contributed by Marijn Faling.
What counts as good evidence?
This paper, written by Sandra Nutley, Alison Powell, and Huw Davies for the Alliance for Useful Evidence, discusses the risks of using a hierarchy of evidence and suggests more complex matrix approaches as an alternative.
The art and craft of bricolage in evaluation
This CDI Practice Paper, by Tom Aston and Marina Apgar, makes the case for ‘bricolage’ in complexity-aware and qualitative evaluation methods.
Rethinking rigour to embrace complexity in peacebuilding evaluation
This 2024 open-access journal article presents the inclusive rigour framework and applies it to three cases of peacebuilding evaluation.
Quick tips to assess the risks of AI applications in monitoring and evaluation: EvalSDGs Insight #19
This Evaluation Insight from EvalSDGs succinctly lays out risks associated with using artificial intelligence (AI) in monitoring and evaluation (M&E).
Pathways to advance professionalisation within the context of the AES
This report by Greet Peersman and Patricia Rogers for the Australasian Evaluation Society (AES) identifies four potential pathways towards professionalisation within the context of the AES.
Estudio de brechas entre necesidades y oferta de programas para desarrollar capacidades de monitoreo y evaluación en América Latina y el Caribe (Study of the gaps between needs and supply of programmes for developing monitoring and evaluation capacities in Latin America and the Caribbean)
The objective of this CLEAR LAC study was to examine the gaps between training needs and training offerings in order to design a relevant and appropriate monitoring and evaluation (M&E) training programme.
Monitoring and accountability practices for remotely managed projects implemented in volatile operating environments
This report explores monitoring and accountability practices for remotely managed projects in volatile environments, highlighting a trend towards remote management as a long-term approach rather than a temporary solution.
eVALUation Matters: Knowledge brokering and use of evidence in tackling Africa’s challenges
This edition of eVALUation Matters addresses the challenges of encouraging the use of evaluation and evidence in decision-making and aligning knowledge needs with what is available and relevant to the African context.
Reimagining the language of engagement in a post-stakeholder world
This article explores how the term "stakeholder" can unintentionally reinforce colonial narratives and systemic inequities.