We need more knowledge about how to do evaluation better.
We need to understand better how to choose which methods and processes to use in which situations, how to use them well, and how to develop new methods and processes to deal with emerging challenges and opportunities.
BetterEvaluation collaborates with others on research and innovation projects to build this knowledge. We also seek to connect people involved in these efforts, to advocate for investment in research and innovation, and to draw attention to their findings.
Here are some of our research and innovation projects:
Phase 1 funded by UNICEF
Effective monitoring of development and humanitarian projects, programmes and other initiatives is essential for improving their performance, but monitoring practice often falls short of what is required. The Global Partnership for Better Monitoring focuses on improving the monitoring function as part of a systems approach to national, local and organisational monitoring and evaluation (M&E). In the first phase, UNICEF and BetterEvaluation are working with other organisations and networks on four priority areas.
This is one of BetterEvaluation's core activities. All of our financial supporters contribute to this project, either directly or indirectly.
There are many different methods and processes that can be used in evaluation. The BetterEvaluation Rainbow Framework organises over 300 of these methods and processes by the tasks that are often undertaken in evaluation. These tasks are organised into seven colour-coded clusters that aim to make it easy for users to find what they need. Downloadable versions provide annotated lists of methods and processes for each task.
Funded by the Australian Government, Department of the Prime Minister and Cabinet.
Partners: Indigenous Community Volunteers (ICV)
This project is supported by the Australian Government to allow the BetterEvaluation team to work with Aboriginal and Torres Strait Islander people to share and promote their evaluation methods and processes and facilitate their feedback and reviews on evaluations that have been conducted in their communities or regions. A clear ethical framework is guiding this work. Once complete, users will be able to find and read about examples of good practice that have been endorsed by specific communities through selecting them on an interactive map.
Funded by Australian Research Council (LP130100176) and UNICEF
Partners: RMIT University, University of Hyderabad, UNICEF, Australian Research Council
The goal of the Evaluating C4D Resource Hub is to help users find the kinds of methods, processes, tools and resources that suit their practical needs and match the approach they want to take. Users move between the Evaluating C4D Framework principles and the Rainbow Framework tasks to help decide on the right methods, processes and tools for their situation.
Funded by International Development Research Centre (IDRC)
The guide aims to support decision making throughout the process of an evaluation: planning its purpose and scope, designing it, conducting it, reporting findings and supporting the use of those findings. It can be used for managing an evaluation conducted by an external evaluator or evaluation team, an internal team, or a combination of these. It can be used for different types of evaluations and for evaluations of different types of interventions, including projects, programs, policies and clusters of projects. It can also be used for the evaluation of research.
Funded by the Australian Department of Foreign Affairs and Trade (DFAT)
Partners: Overseas Development Institute (ODI)
The Methods Lab, run in partnership with ODI, is a research project that develops, tests and institutionalises flexible approaches to impact evaluation.
BetterEvaluation's writeshop series is a collection of six case studies written by evaluation practitioners and, in some cases, evaluation commissioners and managers. Each case study discusses a particular evaluation, the issues it addressed and the possible implications for evaluation practice and theory.