14 results
How to design an M&E framework for a policy research project (Resource)
This Methods Lab guidance note focuses on designing and structuring a monitoring and evaluation framework for a policy research project.

Developing monitoring and evaluation frameworks + framework template (Resource)
This book, written by Anne Markiewicz and Ian Patrick, offers a step-by-step guide to developing a monitoring and evaluation framework.

DIY M&E: A step-by-step guide to building a monitoring and evaluation framework (Resource)
This guide, written by Dana Cross of Grosvenor Management Consulting, gives an overview of how to create an M&E framework.

Australian Volunteers program monitoring, evaluation and learning framework (Resource)
This example of a monitoring, evaluation and learning framework sets out the approach to assessing the performance of the Australian Volunteers Program. This resource and the following information was contributed by Jo Hall.

Why do programs benefit from developing monitoring and evaluation frameworks? (Blog)
This guest blog is by Anne Markiewicz, Director of Anne Markiewicz and Associates, a consultancy that specialises in developing monitoring and evaluation frameworks.

From paper to practice: Supporting the uptake of high-level M&E frameworks (Blog)
Evaluation frameworks are often developed to provide a common reference point for evaluations of different projects that form a program, or for different types of evaluations of a single program.

4 tips for planning your policy research M&E (Blog)
In this guest blog post, Tiina Pasanen of the Overseas Development Institute (ODI) lays out four key ideas to keep in mind when designing an M&E framework for a policy research project.

Evaluation framework (Method)
An evaluation framework (sometimes called a monitoring and evaluation framework or, more recently, a monitoring, evaluation and learning framework) provides an overall framework for evaluations across different programs or different evaluations…

Regression discontinuity (Method)
Regression Discontinuity Design (RDD) is a quasi-experimental evaluation option that measures the impact of an intervention, or treatment, by applying a treatment assignment mechanism based on a continuous eligibility index, which is a variable…

Learning alliances: An approach for building multi-stakeholder innovation systems (Resource)
Millions of dollars are spent each year on research and development (R&D) initiatives in an attempt to improve rural livelihoods in the developing world, but rural poverty remains an intractable problem in many places.

Quasi-experimental methods for impact evaluations (Resource)
This video lecture, given by Dr Jyotsna Puri for the Asian Development Bank (ADB) and the International Initiative for Impact Evaluation (3ie), demonstrates how quasi-experimental methods can circumvent the challenge of creating…

Public impact fundamentals and observatory (Resource)
The Public Impact Fundamentals are a framework developed by the Centre for Public Impact to assess what makes a successful policy outcome and describe what can be done to maximise the chances of achieving public impact.

Quasi-experimental design and methods (Resource)
This guide, written by Howard White and Shagun Sabarwal for UNICEF, looks at the use of quasi-experimental design and methods in impact evaluation.

UNICEF webinar: Quasi-experimental design and methods (Resource)
What is the main difference between quasi-experiments and RCTs? How can I measure impact when establishing a control group is not an option?
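The regression discontinuity entry above describes a treatment assignment rule based on a cutoff in a continuous eligibility index. As a rough illustration only (not taken from any of the listed resources; the simulated data, cutoff of 50, bandwidth of 10, and true effect of 5.0 are all invented for the example), a minimal RDD sketch with a local linear regression on each side of the cutoff might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Units are assigned to treatment when a continuous eligibility index
# (the "running variable") crosses a cutoff -- the deterministic
# assignment rule that defines a sharp RDD.
n = 2000
cutoff = 50.0
index = rng.uniform(0, 100, n)             # continuous eligibility index
treated = (index >= cutoff).astype(float)  # assignment by cutoff

# Simulated outcome: a smooth trend in the index plus a jump of 5.0
# at the cutoff (the invented "true" treatment effect).
outcome = 10 + 0.2 * index + 5.0 * treated + rng.normal(0, 1, n)

# Keep only observations within a bandwidth around the cutoff and
# centre the index, so the regression is local to the discontinuity.
bandwidth = 10.0
window = np.abs(index - cutoff) < bandwidth
x = index[window] - cutoff
d = treated[window]
y = outcome[window]

# Regress the outcome on treatment, the centred index, and their
# interaction; the coefficient on `d` estimates the jump at the cutoff.
X = np.column_stack([np.ones_like(x), d, x, d * x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"Estimated effect at cutoff: {coef[1]:.2f}")  # close to the true 5.0
```

The guides by White and Sabarwal and the 3ie lecture listed above discuss when such a design is credible, e.g. that units cannot precisely manipulate their position relative to the cutoff.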