Search
15 results
DFAT design and monitoring and evaluation standards (Resource)
These updated design, monitoring and evaluation standards from the Australian Government aim to "improve the quality and use of Design and M&E products, and to integrate evaluative thinking into everyday work".

How to design an M&E framework for a policy research project (Resource)
This Methods Lab guidance note focuses on designing and structuring a monitoring and evaluation framework for a policy research project.

'Context Matters' framework for improving evidence use: what do policymakers and practitioners think about it? (Resource)
This blog introduces the 'Context Matters' framework - a living tool that builds on and contributes to learning and thinking on evidence-informed policy making, by providing a lens through which to examine the context (internal and external).

Successful public policy: Lessons from Australia and New Zealand (Resource)
This book is a collection of 20 examples of successful public policies in Australia and New Zealand. It aims to reset the agenda for teaching, research and dialogue on public policy performance.

Week 19: Ways of framing the difference between research and evaluation (Blog)
One of the challenges of working in evaluation is that important terms (like 'evaluation', 'impact', 'indicators', 'monitoring' and so on) are defined and used in very different ways by different people.

Week 19: Ways of describing the difference between research and evaluation (Blog; Portuguese version of the post above)
One of the challenges of working in evaluation is that important terms (such as "evaluation", "impact", "indicators", "monitoring" and so on) are defined and used in very different ways by different people.

BetterEvaluation community's views on the difference between evaluation and research (Blog)
In May we blogged about ways of framing the difference between research and evaluation. We had terrific feedback on this issue from the international BetterEvaluation community, and this update shares the results.

4 tips for planning your policy research M&E (Blog)
In this guest blog post, Tiina Pasanen, from the Overseas Development Institute (ODI), lays out four key ideas to keep in mind when designing an M&E framework for a policy research project.

User feedback on the difference between evaluation and research (Blog)
This page contains thoughts from the BetterEvaluation community provided in response to the blog post on ways of framing the difference between research and evaluation.

Projects assuming responsibility over evaluation: Test-driving utilisation-focused evaluation (Resource)
This presentation from Developing Evaluation Capacity in ICT4D (DECI) outlines the objectives of the project and their use of utilisation-focused evaluation.

Making a difference: M&E of policy research (Resource)
This paper presents examples of, and approaches to, conducting M&E of policy research, drawn from the current experience of a range of research institutes, think tanks and funding bodies.

Tools for policy impact: A handbook for researchers (Resource)
The Overseas Development Institute (ODI), as part of its Research and Policy in Development (RAPID) programme, has been looking at the links between research and policy for several years.

Pathways to advance professionalisation within the context of the AES (Resource)
This report by Greet Peersman and Patricia Rogers for the Australasian Evaluation Society (AES) identifies four potential pathways towards professionalisation within the context of the AES.

Public impact fundamentals and observatory (Resource)
The Public Impact Fundamentals are a framework developed by the Centre for Public Impact to assess what makes a successful policy outcome and to describe what can be done to maximise the chances of achieving public impact.

Making rigorous causal claims in a real-life context: Has research contributed to sustainable forest management? (Resource)
This article discusses an impact evaluation that examined the contribution of two forestry research centres - the Centre for International Forestry Research (CIFOR) and the Centre de Coopération Internationale en Recherche Agronomique pour