Search
150 results
Knight lab - storytelling tools (Resource)
This suite of tools is useful for creating highly interactive, beautiful representations of data.

Multi-stakeholder partnerships: Building blocks for success (Resource)
This report provides an evidence-based assessment of the performance of multi-stakeholder partnerships for sustainable development, concluding that the overall performance of partnerships is mixed at best, and discussing factors that increas…

Proving and Improving the Impact of Development Partnerships - 12 Good Practices for Results Measurement (Resource)
This report summarises 12 good practices of results measurement in development partnerships with the private sector, and includes a number of case studies and practical examples.

Partnership Indicators: Measuring the effectiveness of multi-sector approaches to service provision (Resource)
This paper provides considerations for the creation of partnership indicators for tri-partite partnerships (private sector, public sector and civil society/NGOs) in water and sanitation provision for poor communities in developing countries.

Canva (Resource)
Canva is a very simple, free-to-use, online infographic creation platform. It has a drag-and-drop interface and a range of templates that you can adapt.

Outcome monitoring in large multi-stakeholder research programmes: Lessons from PRISE (Blog)
This guest blog by Tiina Pasanen and Kaia Ambrose discusses how the Pathways to Resilience in Semi-arid Economies (PRISE) project approached the challenge of coming up with…

Multi-stakeholder partnerships guide - Online portal (Resource)
This online portal includes over 60 useful tools and methods especially selected to support and evaluate multi-stakeholder partnership processes.

Dealing with paradox – Stories and lessons from the first three years of consortium-building (Resource)
This case study documents and reflects upon the building of the Consortium of British Humanitarian Agencies (recently re-named the START Network).

How can we assess the value of working in partnerships? (Blog)
Tiina Pasanen (Overseas Development Institute) shares her reflections from the 2016 'M&E on the Cutting Edge' Conference, Partnering for Success, and asks: how do we learn what type of partnerships work well, under what conditions an…

Multiple lines and levels of evidence (Method)
Multiple lines and levels of evidence (MLLE) is a systematic approach to causal inference that involves bringing together different types of evidence (lines of evidence) and considering the strength of the evidence in terms of different ind…

Journals and logs (Method)
Journals and logs are record-keeping tools that can be used to capture information about activities, results, conditions, or personal perspectives on how change occurred over a period of time.

Outcome monitoring and learning in large multi-stakeholder research programmes: lessons from the PRISE consortium (Resource)
This discussion paper outlines the key lessons to emerge from designing and applying an outcome monitoring system to the Pathways to Resilience in Semi-arid Economies (PRISE) project.

Integrity (Method)
Integrity refers to ensuring honesty, transparency, and adherence to ethical behaviour by all those involved in the evaluation process.

Cultural competency (Method)
Cultural competency involves ensuring that evaluators have the skills, knowledge, and experience necessary to work respectfully and safely in cultural contexts different from their own.

Feasibility (Method)
Feasibility refers to ensuring that an evaluation can be realistically and effectively implemented, considering factors such as practicality, resource use, and responsiveness to the programme's context, including factors such as culture and…

Inclusion of diverse perspectives (Method)
Inclusion of diverse perspectives requires attention to ensure that marginalised people and communities are adequately engaged in the evaluation.

Independence (Method)
Independence can include organisational independence, where an evaluator or evaluation team can independently set a work plan and finalise reports without undue interference, and behavioural independence, where evaluators can conduct and re…

Evaluation accountability (Method)
Evaluation accountability relates to processes in place to ensure the evaluation is carried out transparently and to a high-quality standard.

Transferability (Method)
Transferability involves presenting findings in a way that they can be applied in other contexts or settings, considering the local culture and context to enhance the utility and reach of evaluation insights.

Utility (Method)
Utility standards are intended to increase the extent to which program stakeholders find evaluation processes and products valuable in meeting their needs.

Professionalism (Method)
Professionalism within evaluation is largely understood in terms of high levels of competence and ethical practice.

Propriety (Method)
Propriety refers to ensuring that an evaluation will be conducted legally, ethically, and with due regard for the welfare of those involved in it and those affected by its results.

Systematic inquiry (Method)
Systematic inquiry involves thorough, methodical, contextually relevant and empirical inquiry into evaluation questions. Systematic inquiry is one of the guiding principles of the American Evaluation Association.

Transparency (Method)
Transparency refers to evaluation processes and conclusions being open to scrutiny.

Ethical practice (Method)
Ethical practice in evaluation can be understood in terms of designing and conducting an evaluation to minimise any potential for harm and to maximise the value of the evaluation.

Accuracy (Method)
Accuracy refers to the correctness of the evidence and conclusions in an evaluation. It may carry an implication of precision.

Accessibility (Method)
Accessibility of evaluation products includes consideration of the format and access options for reports, including plain language, inclusive print design, material in multiple languages, and material in alternative formats (such as online…

Competence (Method)
Competence refers to ensuring that the evaluation team has or can draw on the skills, knowledge and experience needed to undertake the evaluation.

Outcome harvesting (Approach)
Outcome Harvesting collects (“harvests”) evidence of what has changed (“outcomes”) and, working backwards, determines whether and how an intervention has contributed to these changes.

52 weeks of BetterEvaluation: Week 16: Identifying and documenting emergent outcomes of a global network (Blog)
Global voluntary networks are complex beasts with dynamic and unpredictable actions and interactions. How can we evaluate the results of a network like this? Whose results are we even talking about?

Validation workshop (Method)
A validation workshop is a meeting that brings together evaluators and key stakeholders to review an evaluation's findings.

Human rights and gender equality (Method)
Human rights and gender equality refer to the extent to which an evaluation adequately addresses human rights and gender in its design, conduct, and reporting.