Evaluations are a huge investment of both money and time. Despite this, many evaluations are not used.
In this blog post, Jessica Noske-Turner introduces a newly launched section of the BetterEvaluation website - the Evaluating C4D Resource Hub - and discusses how and why this new area was developed.
On April 16, over a thousand communication for development (C4D) researchers and practitioners will descend on Indonesia for the Social and Behaviour Change Communication (SBCC) Summit. Among them will be members of the Evaluating C4D research team: Professor Jo Tacchi (Loughborough University), Dr Jessica Noske-Turner (University of Leicester), Dr Linje Manyozo (RMIT University), and Rafael Obregon and Ketan Chitnis (UNICEF C4D).
A few months ago we started gathering data on the user experience (UX) of the BetterEvaluation website. We developed user personas to describe our primary audiences, sent out a UX survey, and recently finished a series of interviews and observation studies. We've learnt a huge amount about the BetterEvaluation community, the areas of the website that work well, and those that can be improved. Today I'll be sharing a few key parts of our process and how you can stay involved as we move forward!
This post is based on a paper by Joanna Farmer and Dr Caroline Tomiczek (Associate Director, Urbis), presented at the AES International Evaluation Conference in Canberra on 6 Sept 2017.
We've had a number of great resource contributions come in over the past couple of weeks, so we thought we'd take the time to highlight them here. BetterEvaluation relies on the contributions of members to share and co-create knowledge about monitoring and evaluation, and we feel extraordinarily privileged to be part of a community of people working together to improve evaluation practice around the world.
This is the second of a two-part blog on strategies to support the use of evaluation, building on a session the BetterEvaluation team facilitated at the American Evaluation Association conference last year. While the session focused particularly on strategies to use after an evaluation report has been produced, it is important to address use before and during an evaluation.
What can be done to support the use of evaluation? How can evaluators, evaluation managers and others involved in or affected by evaluations support the constructive use of findings and evaluation processes?
We've now completed the first component of our user research: the user survey. Thank you for the helpful feedback from 50 different countries and from a wide range of users, including evaluators, people who sometimes do evaluation, evaluation managers and users, people involved in evaluation capacity strengthening, students, and others. (If you missed the chance, we're always pleased to receive feedback through our contact form.)