We often get email enquiries asking for advice on preparing the documents used to invite evaluators to submit proposals for an evaluation. These documents have a variety of labels, including Request for Proposal (RFP), Terms of Reference (TOR), and Scope of Work (SOW). The advice below focuses on two important aspects of this process: writing a good RFP/TOR, and sharing it in ways that will attract the best pool of proposals.
Many evaluations include a process of developing logic models and theories of change – an explanation of how the activities of a program, project, policy, network or event are expected to contribute to particular results in the short term and longer term. They have been used for many years - versions can be seen in Carol Weiss' 1972 book "Evaluation research: methods for assessing program effectiveness" - and they have been mainstreamed in many organisations.
In this blog we thought we'd highlight a few of the things you can do with BetterEvaluation to make your experience with the site and community better and more useful to you.
Part of our commitment to better evaluation is making sure that evaluation itself is evaluated better. Like any intervention, evaluations can be evaluated in different ways.
This week, Arnaldo Pellini (Senior Research Fellow, Overseas Development Institute and Lead for Learning at the Knowledge Sector Initiative, Indonesia) and Louise Shaxson (Senior Research Fellow, Overseas Development Institute) reflect on some of the challenges around monitoring, evaluating and learning (MEL) from adaptive programmes.
Last week, we started our focus on Adaptive Management with a blog post by Patricia Rogers that explored how monitoring and evaluation can support adaptive management. This week, we're continuing this series with a guest blog from Fred Carden and Arnaldo Pellini, in which they discuss what they learned about adaptive management in a major project on developing capacity for evidence-based policy.
Adaptive management is usually understood to refer to an iterative process of reviewing and making changes to programmes and projects throughout implementation. Commonly associated with environment and resource management, it's becoming more common in other areas of program management and development. Over the next few weeks, we'll be focusing on the increasing interest in how monitoring and evaluation can support adaptive management.
All too often conferences fail to make good use of the experience and knowledge of people attending, with most time spent presenting prepared material that could be better delivered other ways, and not enough time spent on discussions and active learning. With closing dates for two evaluation conferences fast approaching (the Australasian Evaluation Society and the American Evaluation Association), could you propose something more useful, that would demonstrate how much we know and care about communicating and using information?
The wonderful thing about BetterEvaluation is that it is, at its core, a platform to co-create and share knowledge about how to better conduct, use and manage evaluations.
Evaluation practitioners, managers, experts and partner organizations work together to create and learn from improved knowledge and practice in monitoring and evaluation. We support three interconnected areas of activity: capacity strengthening, M&E research and development, and the BetterEvaluation toolbox, which includes the Rainbow Framework and the BetterEvaluation resource library.
Last week we launched our newest theme page, Sustained and Emerging Impacts Evaluation (SEIE), authored by Jindra Cekan (Valuing Voices), Laurie Zivetz (Valuing Voices), and Patricia Rogers (BetterEvaluation/ANZSOG). The page argues for the need to go back and evaluate the impacts of a project or programme some time after the end of an intervention, and gives some advice on how to do this.
This week, we wanted to open up the floor to you and hear your thoughts about, and experiences of, embedding sustained and emerging impacts evaluation into evaluation practice.