As a member of EvalPartners, we'd like to share this announcement about the launch of the 2017 round of "Innovation Challenges". EvalPartners is a global partnership to strengthen national evaluation capacities. In November 2015, it launched the first-ever long-term global vision for evaluation, developed during EvalYear 2015 through a participatory process with the global evaluation community.
In our recent blog post about making better use of theories of change and logic models in evaluation, we asked BetterEvaluation members to submit a question or challenge they face in creating or using a theory of change, for review by the BetterEvaluation team.
We often get email enquiries asking for advice on preparing the documents used to invite evaluators to submit proposals to conduct an evaluation. These documents go by a variety of labels, including Request for Proposal (RFP), Terms of Reference (TOR), and Scope of Work (SOW). The advice below focuses on two important aspects of this process: writing a good RFP/TOR, and sharing it in ways that will attract the best pool of proposals.
While there are many guidelines and tools to support those conducting evaluations, there are far fewer resources specifically focused on commissioners and managers of evaluation.
Many evaluations include a process of developing logic models and theories of change – an explanation of how the activities of a program, project, policy, network or event are expected to contribute to particular results in the short term and longer term. They have been used for many years – versions can be seen in Carol Weiss' 1972 book "Evaluation research: methods for assessing program effectiveness" – and they have been mainstreamed in many organisations.
In this blog we thought we'd highlight a few of the things you can do with BetterEvaluation to make your experience with the site and community better and more useful to you.
Part of our commitment to better evaluation is making sure that evaluation itself is evaluated better. Like any intervention, evaluations can be evaluated in different ways.
This week, Arnaldo Pellini (Senior Research Fellow, Overseas Development Institute and Lead for Learning at the Knowledge Sector Initiative, Indonesia) and Louise Shaxson (Senior Research Fellow, Overseas Development Institute) reflect on some of the challenges around monitoring, evaluating and learning (MEL) from adaptive programmes.
Last week, we started our focus on Adaptive Management with a blog post by Patricia Rogers that explored how monitoring and evaluation can support adaptive management. This week, we're continuing this series with a guest blog from Fred Carden and Arnaldo Pellini, in which they discuss what they learned about adaptive management in a major project on developing capacity for evidence-based policy.
Adaptive management is usually understood to refer to an iterative process of reviewing and making changes to programmes and projects throughout implementation. Commonly associated with environment and resource management, it is becoming more common in other areas of program management and development. Over the next few weeks, we'll be focusing on the growing interest in how monitoring and evaluation can support adaptive management.