Evaluation reporting is important. While there are many innovative methods to grab your audience's attention, the evaluation report remains a key vehicle for getting your messages across. In this blog, Alice Macfarlan shares her tips for editing a report draft with an audience focus in mind.
In part 1 of this two-part blog series, Greet Peersman and Patricia Rogers introduce the ‘Pathways to advance professionalisation within the context of the AES’ project and report. They explore the four pathways identified in the report: 1) ad hoc, disconnected activities; 2) focused, connected and strategic activities; 3) voluntary credentialing of evaluators; and 4) a regulated and licensed profession. They then discuss their recommendation that the Australasian Evaluation Society follow the pathway of focused, connected and strategic activities, with a view to considering a voluntary credentialing process down the track.
In the previous blog in this series, Greet Peersman and Patricia Rogers introduced the ‘Pathways to advance professionalisation within the context of the AES’ project and report. A major feature of the report is its exploration of 41 activities and approaches that can be used to advance the professionalisation of monitoring and evaluation, and this concluding blog of the two-part series looks at those activities in more detail. We believe they are likely to be of considerable interest to others who are undertaking or planning evaluation capacity strengthening activities, and we encourage you to share your feedback and thoughts at the end of this blog.
This guest post from Caroline Heider (Director General and Senior Vice President, Evaluation, World Bank Group) was originally published on the IEG's #WhatWorks blog. In this post, Caroline reflects on the motivations behind her 2017 Rethinking Evaluation blog series, which was dedicated to unpacking and debating the DAC evaluation criteria, and on how that series links with the consultation process recently launched by the DAC Network on Development Evaluation to review the criteria. We've reposted it on BetterEvaluation because we believe this is an important conversation to have, and we encourage our readers to get involved in the consultation process.
Evaluation frameworks are often developed to provide a common reference point for evaluations of different projects that form a program, or different types of evaluations of a single program. But getting agreement on a shared document is only the start of achieving the intended benefits of evaluation frameworks, such as reduced duplication and overlap, improved data quality, and ease of aggregation and synthesis. This guest blog by George Argyrous (ANZSOG) outlines 9 actions that can be taken to support the implementation of high-level monitoring and evaluation frameworks, and make sure these frameworks don't languish on a dusty shelf.
This blog is an abridged version of the brief Innovations in evaluation: How to choose, develop and support them, written by Patricia Rogers and Alice Macfarlan. It builds on a webinar delivered by Patricia Rogers in May 2018 as a joint project of UNICEF, BetterEvaluation and EVALSDGs. This blog opens up some of the issues and questions about why and how to adopt innovations in evaluation, while the brief goes into further detail about innovations that can be useful in addressing long-standing challenges in evaluation.
In this blog, I share three examples of communication plan templates that address this gap and allow for more detailed thinking through of the communication and dissemination process. I think each template has merit in its own way, but I’d love to hear your thoughts on whether you find them useful, and what processes or discussions you have had about communicating evaluation findings on projects you’ve worked on. How much effort or thought do you typically put into communication? Have you come across any barriers to communicating evaluation results? What’s worked and what hasn’t?
This guest blog by Marlène Läubli Loud aims to start a discussion about which advisory group practices work well in which situations. Marlène looks back on her experiences, outlines some of the conditions that she believes have contributed to securing the “best value” from advisory groups, and asks for other ideas and examples for engaging and utilising advisory groups to their full advantage.
In this blog post, Jessica Noske-Turner introduces a newly launched section of the BetterEvaluation website, the Evaluating C4D Resource Hub, and discusses how and why this new area was developed.
On April 16, over a thousand communication for development (C4D) researchers and practitioners will descend on Indonesia for the Social and Behaviour Change Communication (SBCC) Summit. Among them will be members of the Evaluating C4D research team: Professor Jo Tacchi (Loughborough University), Dr Jessica Noske-Turner (University of Leicester), Dr Linje Manyozo (RMIT University), and Rafael Obregon and Ketan Chitnis (UNICEF C4D).