Happy Holidays! In celebration of a great year, we've collected a list of favourite resources of 2016 that are freely available online, so that we can share them with you as a bit of an end-of-year gift. The resources aren't necessarily new - but they are things that were discovered for the first time, or rediscovered, in 2016. We have a wide range of cool tools, guides, examples and other resources that help us do evaluation better and think about it differently.
We had some great suggestions for this list from people in the BetterEvaluation community and we'd like to thank everyone who wrote back or tweeted to us about their favourite resource. We hope you find it as interesting as we did to see what people's picks were and why.
The resources are clustered around the Rainbow Framework - so that if you are interested in more resources in a particular area, you can click through to find them - with more general resources at the end.
Are we missing something you found enormously useful or engaging in 2016? Please leave us a note in the comments with a link and tell us why it's your pick!
Favourite Evaluation Resources of 2016
Manage an evaluation or an evaluation system
The World Clock Meeting Planner: This one seems almost too simple for inclusion, but then again, it is one of our most used tools. This tool lets you figure out the best time to talk to people from multiple time zones - and we highly recommend this if timezone arithmetic gives you as big a headache as it does us. There's also a more accessible option at The Time Now, which is WCAG 2.0 compatible.
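If you'd rather script the timezone arithmetic yourself, here is a minimal sketch (our own illustration, not part of the tool; it assumes Python 3.9+ with the standard zoneinfo module and made-up participant time zones) that shows one proposed meeting time as it would appear in each participant's zone:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

# A proposed meeting time, anchored in UTC to avoid ambiguity.
meeting_utc = datetime(2016, 12, 15, 14, 0, tzinfo=ZoneInfo("UTC"))

# Hypothetical participant time zones - substitute your own.
zones = ["Australia/Melbourne", "Europe/London", "America/New_York"]

for tz in zones:
    local = meeting_utc.astimezone(ZoneInfo(tz))
    print(f"{tz}: {local.strftime('%Y-%m-%d %H:%M')}")
```

Anchoring the time in UTC and converting outwards (rather than converting between pairs of zones) sidesteps most daylight-saving headaches, which is essentially what the meeting-planner tools do for you.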
The GeneraTOR: Our interactive tool prompts you for particular information to produce a draft Terms of Reference document which can be shared, reviewed and finalised with other stakeholders. Our Terms of Reference page has additional resources.
Capacity development: Dennis Bours drew our attention to his favourite 2016 resource - Scott Chaplowe's book Monitoring and Evaluation Training, and its accompanying Resource Page, which includes over 150 resources for M&E practice and capacity development, hand-picked from the authors' research for the book (most of them hyperlinked and freely available online). Although the book is not entirely free, Chapter 1 – M&E Training that Makes a Difference and Chapter 5 – What makes a good M&E Trainer? are available for download. You can read Dennis' full review of the book here. See also our Develop Evaluation Capacity page for more resources.
Define what is to be evaluated
Online, interactive software for drawing logic models: One of our enduring blog posts discusses the nuts and bolts of drawing logic models. Since that post, two new tools, Dylomo and Theory Maker, were submitted to BetterEvaluation this year, and we think it's great that people are thinking about new ways to make the process of drawing logic models as easy and accessible as possible. Our page on Develop Programme Theory has links to other resources on drawing logic models.
Sketch maps for articulating mental models: We're so grateful Rosemary Cairns shared her experiences in a blog on articulating mental models - she gives some great insights from her experience in the field about getting people to express how they think a program or project works. It's definitely worth a read if you haven't already. Our page on articulating mental models has information on other strategies.
Outcome Harvesting resources: Courtesy of Ricardo Wilson-Grau, the website OutcomeHarvesting.net is a source of applications, events and resources to support the development of a community of practitioners of Outcome Harvesting, which allows evaluations to identify and include unintended outcomes. The site also hosts a forum where lessons being learned worldwide about adapting this alternative approach to monitoring and evaluation are shared, challenged and advanced. You can also visit our Outcome Harvesting Approach page, written for BetterEvaluation by Ricardo.
Frame the boundaries for an evaluation
A fill-in-the-blank exercise: "What I want to know from the [program name] evaluation is ________." We found this in Michael Quinn Patton's book Utilization-Focused Evaluation (2008, pp. 49-51). You can read the full extract on Google Books here - it's a lovely anecdote about Michael trying to engage a room full of hostile stakeholders in order to identify a set of evaluation questions and concerns. As both sides grow increasingly frustrated, and are on the verge of calling off the evaluation, Michael asks everyone in the group to fill in the blank 10 times - and this, and the subsequent process of narrowing down everyone's questions, completely turns the evaluation around. It's definitely worth a read. Also check out page 52 (Exhibit 2.3), which lists five Criteria for Utilization-Focused Evaluation Questions. See also our Utilization-Focused Evaluation approach page.
Checklists for KEQs: The CDC Evaluation Questions Checklist was recently recommended to us by Robin Kuwahara, who wrote: "Colleagues of mine at CDC’s National Asthma Control Program created a useful checklist for assessing potential evaluation questions. The list is grounded in the evaluation literature and has benefitted from the practice wisdom of evaluators who serve in a range of capacities and agencies." The list has an emphasis on the importance of involving stakeholders in developing questions.
Another KEQ checklist that we'd recommend is Lori Wingate and Daniela Schroeter's Evaluation Checklist for Program Evaluation, which distills and explains criteria for effective evaluation questions. It is housed on the Evaluation Center at Western Michigan University's Evaluation Checklist page - which we'd highly recommend if you are into checklists (and who isn't, right?).
For more advice on developing KEQs, visit Specify Key Evaluation Questions.
Describe activities, outcomes, impacts and context
Examples of using big data in evaluation: In our various interactions with evaluators this year (including sessions on innovation in evaluation in Ottawa and Sydney), big data was identified as a challenge for evaluators - with very few evaluators having experience or training in using big data for evaluation. The report Integrating Big Data into the Monitoring and Evaluation of Development Programmes, by Michael Bamberger for Global Pulse, is a Call to Action to encourage and inspire development agencies and evaluators to collaborate with data scientists to find innovative ways of using big data in development, and includes examples of how big data and related ICT (information and communications technologies) are already being used in programme monitoring, evaluation and learning. You can find more resources on our Big Data option page.
Analysing data with Excel – Ann K Emery's set of 50 two-minute videos covers topics such as cleaning and tidying data, exploring data, and analysing and reporting – and includes ways of using Excel to quickly analyse qualitative data such as text responses to questionnaires.
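If you work in code rather than spreadsheets, the same kind of quick tally of text responses (what you might do in Excel with LOWER and COUNTIF) can be sketched in a few lines of plain Python - a hypothetical illustration of ours, with made-up survey responses:

```python
from collections import Counter

# Hypothetical free-text questionnaire responses.
responses = [
    "more training", "Better reporting", "more training",
    "clearer questions", "better reporting", "more training",
]

# Normalise case and whitespace, then tally each cleaned answer -
# a COUNTIF-style frequency count over the cleaned column.
counts = Counter(r.strip().lower() for r in responses)

for answer, n in counts.most_common():
    print(f"{answer}: {n}")
```

Cleaning before counting matters: without the lower-casing step, "Better reporting" and "better reporting" would be tallied as different answers.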
Data Visualisation checklist by Ann K Emery and Stephanie Evergreen - Updated in May 2016, this checklist should be printed out and stuck up on your wall above wherever you work so that there's no way you can miss it. For those who want more information, you can find Stephanie Evergreen's pages and recommended resources on BetterEvaluation under Visualise Data and Develop Reporting Media.
Visualising missing data - the Newfoundland and Labrador chapter of the Canadian Evaluation Society recommended an AEA365 blog post by Tony Fujs, which provides detailed instructions on how to visualise cases with missing data - and explains why this is important. (If you don't know the AEA365 blog series, do check it out for its amazing range of insights and resources, delivered one a day all year round).
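Before you can visualise missingness you have to summarise it, and that first step can be sketched very simply. The following is our own toy illustration (not Tony Fujs's method, and the survey records are invented): count the missing values per field and print a crude text bar for each:

```python
# Hypothetical survey records; None marks a missing value.
records = [
    {"age": 34, "income": None, "region": "north"},
    {"age": None, "income": 52000, "region": "north"},
    {"age": 29, "income": 48000, "region": None},
    {"age": 41, "income": None, "region": "south"},
]

fields = ["age", "income", "region"]
for f in fields:
    # Count records where this field is missing.
    missing = sum(1 for r in records if r[f] is None)
    bar = "#" * missing  # crude text 'chart' of missingness
    print(f"{f:<8} {missing}/{len(records)} missing {bar}")
```

Even a summary this simple can flag a problem worth investigating - here, income is missing far more often than the other fields, which is exactly the kind of pattern a proper missing-data visualisation would surface.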
Evidence from previous evaluations - 3ie's Impact Evaluation Repository was recommended to us by Tricia Petruney via Twitter, who called it a "golden global good" - and we agree. The Repository is an index of all published impact evaluations of development interventions. All studies in the Impact Evaluation Repository have been screened to ensure they meet 3ie's inclusion criteria, and in August 2016 a massive updating exercise of the IER was completed, bringing the total number of evaluations and links to original studies to 4,260 - quite an undertaking!
Understand Causes of outcomes and impacts
Pathways to change: Evaluating development interventions with qualitative comparative analysis (QCA) - This report by Barbara Befani contains a step-by-step guide on how to apply and ensure the quality of QCA in real-life development evaluation, including common mistakes and challenges. To quote Rick Davies in his review on M&E News: "This is an important publication, worth spending some time with. It is a detailed guide on the use of QCA, written specially for use by evaluators. Barbara Befani has probably more experience and knowledge of the use of QCA for evaluation purposes than anyone else. This is where she has distilled all her knowledge to date. There are lots of practical examples of the use of QCA scattered throughout the book, used to support particular points about how QCA works. It is not an easy book to read but is well worth the effort because there is so much that is of value. It is the kind of book you probably will return to many times."
Our Qualitative Comparative Analysis Page, written by Rick Davies, has more resources and guidance on QCAs.
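To give a flavour of what QCA involves, here is a toy sketch of its first step - building a truth table that groups cases by their configuration of conditions and checks how consistently each configuration produces the outcome. The conditions, cases and scores below are entirely hypothetical, and real (crisp-set) QCA goes on to minimise these configurations, which this sketch does not attempt:

```python
from collections import defaultdict

# Hypothetical crisp-set cases: 1 = condition present, 0 = absent.
# Conditions: funding (F), local buy-in (B); outcome: success (S).
cases = [
    {"F": 1, "B": 1, "S": 1},
    {"F": 1, "B": 0, "S": 0},
    {"F": 0, "B": 1, "S": 1},
    {"F": 1, "B": 1, "S": 1},
    {"F": 0, "B": 0, "S": 0},
]

# One truth-table row per configuration of conditions, recording
# [number of cases, number of those cases showing the outcome].
rows = defaultdict(lambda: [0, 0])
for c in cases:
    key = (c["F"], c["B"])
    rows[key][0] += 1
    rows[key][1] += c["S"]

for (f, b), (n, s) in sorted(rows.items()):
    # Consistency: share of cases with this configuration showing the outcome.
    print(f"F={f} B={b}: {n} case(s), consistency={s / n:.2f}")
```

In this invented example, every case with local buy-in succeeds regardless of funding - the kind of pattern QCA's minimisation step would then express as a simplified causal recipe.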
Synthesise data from one or more evaluations
Free e-Book by Judy Oakden and Melissa Weenink - What’s on the rubric horizon: Taking stock of our current practice and thinking about what is next. This book explores some of the challenges Judy and Melissa have encountered using rubrics in their practice and includes feedback from a discussion during a practice-based session at the ANZEA Conference in Auckland, New Zealand in 2015 exploring the difficulties evaluators face with rubrics. See more on Rubrics.
Report and Support Use of findings
Monitoring & Evaluation and Climate Change Interventions is a weekly online broadsheet of curated sector news and information on M&E, knowledge management, learning, capacity development and informed decision-making in a changing climate. It's a really handy place to catch up on the latest opinions and goings-on in the area, and we always find something new and interesting when we check in. We also think it's worth highlighting under our Report and Support Use cluster, as an example of new avenues for sharing reports and findings and starting conversations about them.
A recommendation from BE Member Jess Noske-Turner: Monitoring and Evaluation for Participatory Theatre for Change guide. Participatory Theatre for Change (PTC), similar to other participatory communication, has typically been one of the 'hard to measure' approaches to address social and development challenges. This guide offers practical guidance and tool suggestions for implementing monitoring and evaluation in PTC programs, and highlights considerations and approaches for process and quality monitoring of PTC. Jess writes: "This is not a breaking-new resource, but I have only just discovered it, perhaps because I assumed that it would be very specific to Participatory Theatre. In fact, I can see that this guide could be adapted and used for a range of different kinds of C4D. I think the strength is the way the theory of change is articulated - which is quite simple but specific - and the way the rest of the guide is built around that."
Tom Archibald let us know his 2016 pick via Twitter - The IIED Briefing Paper: Realising the SDGs by reflecting on the way(s) we reason, plan and act: the importance of evaluative thinking. Tom says: "This resource stood out for me because in the laudable push to evaluate the SDGs, there is a risk of approaching the task as a purely technical endeavor, consisting of the application of predetermined metrics as a sort of compliance activity. This publication compellingly makes the case that evaluation, especially in complex and adaptive contexts such as the SDGs, requires 'the skills and dispositions of critical thinking,' and that 'all of us—evaluators, policymakers, parliamentarians, implementers and the general public—must also think evaluatively.'"
Lanie Stockman had two top picks for us:
The Pelican Initiative: Platform for Evidence-based Learning & Communication for Social Change (which we are also huge fans of). In particular, there were two threads that Lanie really appreciated: "[The threads on] evaluation terms of reference and randomised control trials were energetic and relevant. The discussions confirmed that: (1) if evaluation terms of reference are not sensible, there's a good chance the evaluation report won't be either! and (2) It's ok to question Randomised Control Trial evaluation designs as the 'gold standard' - a range of knowledge forms are legitimate and important. Ultimately the evaluation questions should guide method." - Both very good takeaway points!
Developmental Evaluation by Michael Quinn Patton. Lanie's takeaway from this book was: "It's all about the relationships! This book reinforced that the evaluator-programmer relationship is critical if the evaluation report is going to come off the dusty shelf and actually be used for learning and program improvement." You can read an overview of the book by Michael Quinn Patton himself here, and we'd also recommend checking out our theme page on Developmental Evaluation for some additional resources.
Thanks to everyone who helped us with this list. Tell us your top 2016 pick in the comments below!
(Editor's comment: For anyone counting, yes, there are more than 21 resources in the list - we kept coming back to add more!)