Search
148 results
[Resource] DFAT design and monitoring and evaluation standards
These updated design, monitoring and evaluation standards from the Australian Government aim to "improve the quality and use of Design and M&E products, and to integrate evaluative thinking into everyday work".

[Resource] Quizlet flashcard software
Quizlet is an online flashcard-generating platform that can be used to quickly make a pack of data cards. This resource and the following information were contributed by Alice Macfarlan.

[Resource] DIY M&E: A step-by-step guide to building a monitoring and evaluation framework
This guide, written by Dana Cross of Grosvenor Management Consulting, gives an overview of how to create an M&E framework.

[Resource] Conference on Improving the use of M&E - Keynote speech by Marlene Läubli Loud
This keynote presentation, given by Marlene Läubli Loud at the CDI Conference 2014: Improving the use of M&E processes and findings, presents the current state of affairs regarding the utilisation of M&E processes…

[Resource] Acinonyx cervidae hircus: Child-led evaluation of the Building Skills for Life programme in Cambodia
This report presents a child-led evaluation of a multi-sectoral programme in Cambodia seeking to empower adolescent girls and address the challenges they face in accessing quality education.

[Resource] Okiko in pursuit of a snail: Child-led evaluation of the Building Skills for Life programme in Kenya
This report, the third in the series, presents a child-led evaluation of a multi-sectoral programme in Kenya seeking to empower adolescent girls and address the challenges they face in accessing quality education.

[Blog] Evaluation led by children
This is a discussion, originally posted in the Gender and Evaluation community and led by Rituu B Nanda, regarding Laura Hughston's report, which presents a child-led evaluation of a multi-sectoral programme in Cambodia seeking…

[Resource] Ten steps to a results based monitoring and evaluation system. A handbook for development practitioners
This guide provides a ten-step model to help plan, design, and implement a results-based M&E system for good management in organisations working in the public sector.

[Method] Multiple lines and levels of evidence
Multiple lines and levels of evidence (MLLE) is a systematic approach to causal inference that involves bringing together different types of evidence (lines of evidence) and considering the strength of the evidence in terms of different…

[Method] Journals and logs
Journals and logs are record-keeping tools that can be used to capture information about activities, results, conditions, or personal perspectives on how change occurred over a period of time.

[Method] Integrity
Integrity refers to ensuring honesty, transparency, and adherence to ethical behaviour by all those involved in the evaluation process.

[Method] Cultural competency
Cultural competency involves ensuring that evaluators have the skills, knowledge, and experience necessary to work respectfully and safely in cultural contexts different from their own.

[Method] Feasibility
Feasibility refers to ensuring that an evaluation can be realistically and effectively implemented, considering factors such as practicality, resource use, and responsiveness to the programme's context, including culture…

[Method] Inclusion of diverse perspectives
Inclusion of diverse perspectives requires attention to ensure that marginalised people and communities are adequately engaged in the evaluation.

[Method] Independence
Independence can include organisational independence, where an evaluator or evaluation team can independently set a work plan and finalise reports without undue interference, and behavioural independence, where evaluators can conduct and…

[Method] Evaluation accountability
Evaluation accountability relates to the processes in place to ensure an evaluation is carried out transparently and to a high standard of quality.

[Method] Transferability
Transferability involves presenting findings in a way that allows them to be applied in other contexts or settings, taking the local culture and context into account to enhance the utility and reach of evaluation insights.

[Method] Utility
Utility standards are intended to increase the extent to which program stakeholders find evaluation processes and products valuable in meeting their needs.

[Method] Professionalism
Professionalism within evaluation is largely understood in terms of high levels of competence and ethical practice.

[Method] Propriety
Propriety refers to ensuring that an evaluation will be conducted legally, ethically, and with due regard for the welfare of those involved in it and those affected by its results.

[Method] Systematic inquiry
Systematic inquiry involves thorough, methodical, contextually relevant, and empirical inquiry into evaluation questions. It is one of the guiding principles of the American Evaluation Association.

[Method] Transparency
Transparency refers to the evaluation's processes and conclusions being open to scrutiny.

[Method] Ethical practice
Ethical practice in evaluation can be understood in terms of designing and conducting an evaluation so as to minimise any potential for harm and to maximise the value of the evaluation.

[Method] Accuracy
Accuracy refers to the correctness of the evidence and conclusions in an evaluation. It may also carry an implication of precision.

[Method] Accessibility
Accessibility of evaluation products includes consideration of the format and access options for reports, including plain language, inclusive print design, material in multiple languages, and material in alternative formats (such as online…

[Method] Competence
Competence refers to ensuring that the evaluation team has, or can draw on, the skills, knowledge, and experience needed to undertake the evaluation.

[Approach] Outcome harvesting
Outcome Harvesting collects ("harvests") evidence of what has changed ("outcomes") and, working backwards, determines whether and how an intervention has contributed to these changes.

[Blog] 52 weeks of BetterEvaluation: Week 16: Identifying and documenting emergent outcomes of a global network
Global voluntary networks are complex beasts with dynamic and unpredictable actions and interactions. How can we evaluate the results of a network like this? Whose results are we even talking about?

[Method] Validation workshop
A validation workshop is a meeting that brings together evaluators and key stakeholders to review an evaluation's findings.

[Method] Human rights and gender equality
Human rights and gender equality refer to the extent to which an evaluation adequately addresses human rights and gender in its design, conduct, and reporting.

[Method] Strengthening national evaluation capacities
Strengthening national evaluation capacities refers to the ways in which an evaluation can have broader value, beyond a single evaluation report, by increasing national capacities.

[Method] Validity
Validity refers to the extent to which evaluation findings are correct.