This blog is an abridged version of the brief Innovations in evaluation: How to choose, develop and support them, written by Patricia Rogers and Alice Macfarlan. It builds on a webinar delivered by Patricia Rogers in May 2018 as a joint project of UNICEF, BetterEvaluation and EVALSDGs. This blog opens up some of the issues and questions about why and how to adopt innovations in evaluation, while the brief goes into further detail about innovations that can be useful in addressing long-standing challenges in evaluation.
In this blog, I want to share three examples of communication plan templates that address this and allow for more detail and thinking through of the communication and dissemination process. I think each of the templates has merit in its own way, but I'd love to hear your thoughts on whether you find them useful, and what processes or discussions you have had about communicating evaluation findings on projects you've worked on. How much effort or thought do you typically put into communication? Are there any barriers to communicating evaluation results that you've come across? What's worked and what hasn't?
This guest blog by Marlène Läubli Loud aims to start a discussion about what advisory group practices work well in what situations. Marlène looks back on her experiences and outlines some of the conditions that she believes have contributed to securing the “best value” from advisory groups, and asks for other ideas and examples for engaging and utilising advisory groups to their full advantage.
In this blog post, Jessica Noske-Turner introduces a newly launched section of the BetterEvaluation website - the Evaluating C4D Resource Hub - and discusses how and why this new area was developed.
On April 16, over a thousand communication for development (C4D) researchers and practitioners will descend on Indonesia for the Social and Behaviour Change Communication (SBCC) Summit. Among them will be members of the Evaluating C4D research team: Professor Jo Tacchi (Loughborough University), Dr Jessica Noske-Turner (University of Leicester), Dr Linje Manyozo (RMIT University), and Rafael Obregon and Ketan Chitnis (UNICEF C4D).
A few months ago we started gathering data on the user experience (UX) of the BetterEvaluation website. We developed user personas to describe our primary audiences, sent out a UX survey, and recently finished a series of interviews and observation studies. We've learnt a huge amount about the BetterEvaluation community, the areas of the website that work well, and the areas that can be improved. Today I'll be sharing a few key parts of our process and how you can stay involved as we move forward!
This post is based on a paper by Joanna Farmer and Dr Caroline Tomiczek (Associate Director, Urbis), presented at the AES International Evaluation Conference in Canberra on 6 September 2017.
We've had a number of great resource contributions come in over the past couple of weeks and so we thought we'd take the time to highlight them here. BetterEvaluation relies on the contributions of members to share and co-create knowledge about monitoring and evaluation, and we feel extraordinarily privileged to be a part of a community of people who are working together to help improve evaluation practice around the world.
This is the second of a two-part blog on strategies to support the use of evaluation, building on a session the BetterEvaluation team facilitated at the American Evaluation Association conference last year. While the session focused particularly on strategies to use after an evaluation report has been produced, it is also important to address use before and during an evaluation.