A few months ago we started gathering data on the user experience (UX) of the BetterEvaluation website. We developed user personas to describe our primary audiences, sent out a UX survey, and recently finished a series of interviews and observation studies. We've learnt a huge amount about the BetterEvaluation community, the areas of the website that work well, and the areas that can be improved. Today I'll be sharing a few key parts of our process and how you can stay involved as we move forward!
This post is based on a paper by Joanna Farmer and Dr Caroline Tomiczek (Associate Director, Urbis), presented at the AES International Evaluation Conference in Canberra on 6 Sept 2017.
We've had a number of great resource contributions come in over the past couple of weeks and so we thought we'd take the time to highlight them here. BetterEvaluation relies on the contributions of members to share and co-create knowledge about monitoring and evaluation, and we feel extraordinarily privileged to be a part of a community of people who are working together to help improve evaluation practice around the world.
This is the second of a two-part blog on strategies to support the use of evaluation, building on a session the BetterEvaluation team facilitated at the American Evaluation Association conference last year. While the session focused particularly on strategies to use after an evaluation report has been produced, it is also important to address use before and during an evaluation.
What can be done to support the use of evaluation? How can evaluators, evaluation managers and others involved in or affected by evaluations support the constructive use of findings and evaluation processes?
We've now completed the first component of our user research - the user survey. Thank you for the helpful feedback from 50 countries and from a wide range of users, including evaluators, people who sometimes do evaluation, evaluation managers, users of evaluation, people involved in evaluation capacity strengthening, students, and others. (If you missed the chance, we're always pleased to get feedback through our contact form.)
This guest blog is by Anne Markiewicz, Director of Anne Markiewicz and Associates, a consultancy that specialises in developing Monitoring and Evaluation Frameworks. Anne is the co-author, with Ian Patrick, of the textbook ‘Developing Monitoring and Evaluation Frameworks’ (Sage 2016). She has extensive experience in designing and implementing monitoring and evaluation frameworks for a wide range of initiatives, and in building the capacity of organisations to plan for monitoring and evaluation.
Chris Lysy of Lysy Design (also known as Fresh Spectrum's 'evaluation cartoonist') recently made our day by storifying an example of a logic model Patricia Rogers had previously created for the UNICEF Impact Evaluation Series in Brief 2: Theory of Change. With a few simple changes, Chris has managed to turn a rather static diagram into something that is more visually appealing and understandable to stakeholders.
He's kindly let us share it with you here. You can find the original post, along with more of Chris' writing on data design, on the Lysy Design website, where you can also get help telling the story of your data if you need it.
We've received a lot of useful feedback since launching our user experience survey. Alongside the positive, we've gained insight into some of the difficulties experienced when using and navigating the BetterEvaluation website.
Thank you to everyone who has completed the survey! All of this feedback has been incredibly helpful for us to understand what's working well and what needs improvement.