Search results (151)
How to design an M&E framework for a policy research project (Resource)
This Methods Lab guidance note focuses on designing and structuring a monitoring and evaluation framework for policy research projects.

Principles-focused evaluation: the GUIDE (Resource)
How can programs and organizations ensure they are adhering to core principles, and assess whether doing so is yielding the desired results?

'Context Matters' framework for improving evidence use: what do policymakers and practitioners think about it? (Resource)
This blog introduces the 'Context Matters' framework, a living tool that builds on and contributes to learning and thinking on evidence-informed policy making by providing a lens through which to examine the context (internal and external) …

Successful public policy: Lessons from Australia and New Zealand (Resource)
This book is a collection of 20 examples of successful public policies in Australia and New Zealand. It aims to reset the agenda for teaching, research and dialogue on public policy performance.

Week 44: Anecdote as epithet - Rumination #1 from Qualitative Research and Evaluation Methods (Blog)
The 4th edition of Qualitative Research and Evaluation Methods by Michael Quinn Patton will be published in mid-November 2014. A new feature is one personal "rumination" in each chapter.

Week 47: Rumination #3: Fools' gold: the widely touted methodological "gold standard" is neither golden nor a standard (Blog)
This week's post is an abbreviated version of a "rumination" from the …

4 tips for planning your policy research M&E (Blog)
In this guest blog post, Tiina Pasanen of the Overseas Development Institute (ODI) lays out four key ideas to keep in mind when designing an M&E framework for a policy research project.

Making a difference: M&E of policy research (Resource)
The paper presents examples of and approaches to conducting M&E of policy research, drawing on the current experience of a range of research institutes, think tanks and funding bodies.

Tools for policy impact: A handbook for researchers (Resource)
The Overseas Development Institute (ODI), as part of its Research and Policy in Development (RAPID) programme, has been looking at the links between research and policy for several years.

Public impact fundamentals and observatory (Resource)
The Public Impact Fundamentals are a framework developed by the Centre for Public Impact to assess what makes a successful policy outcome and to describe what can be done to maximise the chances of achieving public impact.

Multiple lines and levels of evidence (Method)
Multiple lines and levels of evidence (MLLE) is a systematic approach to causal inference that involves bringing together different types of evidence (lines of evidence) and considering the strength of the evidence in terms of different …

Journals and logs (Method)
Journals and logs are record-keeping tools that can be used to capture information about activities, results, conditions, or personal perspectives on how change occurred over a period of time.

Integrity (Method)
Integrity refers to ensuring honesty, transparency, and adherence to ethical behaviour by all those involved in the evaluation process.

Cultural competency (Method)
Cultural competency involves ensuring that evaluators have the skills, knowledge, and experience necessary to work respectfully and safely in cultural contexts different from their own.

Feasibility (Method)
Feasibility refers to ensuring that an evaluation can be realistically and effectively implemented, considering factors such as practicality, resource use, and responsiveness to the programme's context, including factors such as culture and …

Inclusion of diverse perspectives (Method)
Inclusion of diverse perspectives requires attention to ensuring that marginalised people and communities are adequately engaged in the evaluation.

Independence (Method)
Independence can include organisational independence, where an evaluator or evaluation team can independently set a work plan and finalise reports without undue interference, and behavioural independence, where evaluators can conduct and …

Evaluation accountability (Method)
Evaluation accountability relates to the processes in place to ensure the evaluation is carried out transparently and to a high standard of quality.

Transferability (Method)
Transferability involves presenting findings in a way that allows them to be applied in other contexts or settings, considering the local culture and context to enhance the utility and reach of evaluation insights.

Utility (Method)
Utility standards are intended to increase the extent to which program stakeholders find evaluation processes and products valuable in meeting their needs.

Professionalism (Method)
Professionalism within evaluation is largely understood in terms of high levels of competence and ethical practice.

Propriety (Method)
Propriety refers to ensuring that an evaluation will be conducted legally, ethically, and with due regard for the welfare of those involved in it and those affected by its results.

Systematic inquiry (Method)
Systematic inquiry involves thorough, methodical, contextually relevant and empirical inquiry into evaluation questions. It is one of the guiding principles of the American Evaluation Association.

Transparency (Method)
Transparency refers to an evaluation's processes and conclusions being open to scrutiny.

Ethical practice (Method)
Ethical practice in evaluation can be understood in terms of designing and conducting an evaluation so as to minimise any potential for harm and to maximise the value of the evaluation.

Accuracy (Method)
Accuracy refers to the correctness of the evidence and conclusions in an evaluation; it may also carry an implication of precision.

Accessibility (Method)
Accessibility of evaluation products includes consideration of the format and access options for reports, including plain language, inclusive print design, material in multiple languages, and material in alternative formats (such as online …).

Competence (Method)
Competence refers to ensuring that the evaluation team has, or can draw on, the skills, knowledge and experience needed to undertake the evaluation.

Outcome harvesting (Approach)
Outcome Harvesting collects ("harvests") evidence of what has changed ("outcomes") and, working backwards, determines whether and how an intervention has contributed to these changes.

52 weeks of BetterEvaluation: Week 16: Identifying and documenting emergent outcomes of a global network (Blog)
Global voluntary networks are complex beasts, with dynamic and unpredictable actions and interactions. How can we evaluate the results of a network like this? Whose results are we even talking about?

Validation workshop (Method)
A validation workshop is a meeting that brings together evaluators and key stakeholders to review an evaluation's findings.

Human rights and gender equality (Method)
Human rights and gender equality refer to the extent to which an evaluation adequately addresses human rights and gender in its design, conduct, and reporting.