Evaluation frameworks are often developed to provide a common reference point for evaluations of different projects that form a program, or different types of evaluations of a single program. But getting agreement on a shared document is only the start of achieving the intended benefits of evaluation frameworks, such as reduced duplication and overlap, improved data quality, and ease of aggregation and synthesis. This guest blog by George Argyrous (ANZSOG) outlines 9 actions that can be taken to support the implementation of high-level monitoring and evaluation frameworks, and make sure these frameworks don't languish on a dusty shelf.
This blog is an abridged version of the brief Innovations in evaluation: How to choose, develop and support them, written by Patricia Rogers and Alice Macfarlan. It builds on a webinar delivered by Patricia Rogers in May 2018 as a joint project of UNICEF, BetterEvaluation and EVALSDGs. The blog opens up some of the issues and questions about why and how to adopt innovations in evaluation, while the brief goes into further detail about innovations that can be useful in addressing long-standing challenges in evaluation.
In this blog, I share three examples of communication plan templates that address this, allowing for more detail and for thinking through the communication and dissemination process. I think each of the templates has merit in its own way, but I’d love to hear your thoughts on whether you find them useful, and what processes or discussions you have had about communicating evaluation findings on projects you’ve worked on. How much effort and thought do you typically put into communication? Are there any barriers to communicating evaluation results that you’ve come across? What’s worked and what hasn’t?
This guest blog by Marlène Läubli Loud aims to start a discussion about what advisory group practices work well in what situations. Marlène looks back on her experiences and outlines some of the conditions that she believes have contributed to securing the “best value” from advisory groups, and asks for other ideas and examples for engaging and utilising advisory groups to their full advantage.
In this blog post, Jessica Noske-Turner introduces a newly launched section of the BetterEvaluation website, the Evaluating C4D Resource Hub, and discusses how and why this new area was developed.
On April 16, over a thousand communication for development (C4D) researchers and practitioners will descend on Indonesia for the Social and Behaviour Change Communication (SBCC) Summit. Among them will be members of the Evaluating C4D research team: Professor Jo Tacchi (Loughborough University), Dr Jessica Noske-Turner (University of Leicester), Dr Linje Manyozo (RMIT University), and Rafael Obregon and Ketan Chitnis (UNICEF C4D). Together we will launch the new Evaluating C4D Resource Hub.
A few months ago we started gathering data on the user experience (UX) of the BetterEvaluation website. We developed user personas to describe our primary audiences, sent out a UX survey, and we've recently finished a series of interviews and observation studies. We've learnt a huge amount about the BetterEvaluation community, the areas of the website that work well, and the areas that can be improved, and today I'll be sharing a few key parts of our process and how you can stay involved as we move forward!
This post is based on a paper by Joanna Farmer and Dr Caroline Tomiczek (Associate Director, Urbis), presented at the AES International Evaluation Conference in Canberra on 6 September 2017.
We've had a number of great resource contributions come in over the past couple of weeks, so we thought we'd take the time to highlight them here. BetterEvaluation relies on the contributions of members to share and co-create knowledge about monitoring and evaluation, and we feel extraordinarily privileged to be part of a community of people who are working together to help improve evaluation practice around the world.