Whether you are commissioning an evaluation, designing one or implementing one, having - and sharing - a very clear understanding of what is being evaluated is paramount. For complicated or complex interventions this isn't always as straightforward as it sounds, which is why BetterEvaluation offers specific guidance on options for doing this.
How do we ensure we address all the important aspects of an evaluation when we’re planning it? How do we manage to consider the different options without being overwhelmed?
Evaluation journals play an important role in documenting, developing, and sharing theory and practice.
In this week's post, we've highlighted evaluation journals that would be useful to add to your regular reading, or to refer to for specific searches.
Data analysis is sometimes the weak link in an evaluation plan. Answering key evaluation questions requires thoughtful analysis - and this needs appropriate tools.
Global voluntary networks are complex beasts with dynamic and unpredictable actions and interactions. How can we evaluate the results of a network like this? Whose results are we even talking about? This was the challenge facing BioNET when they came to the end of their five-year programme, and it is the subject of the second paper in the BetterEvaluation writeshop series, which we want to introduce in this week's blog.
One of the most effective ways of learning about the evaluation field is to attend a conference, present your work and interact with other professionals.
This is why, this week, we are providing you with a round-up of evaluation conferences taking place this year. We already talked about the Evaluation Conclave, which took place in Kathmandu in February, but there are many more taking place across the world.
How do we ensure our evaluations are conducted ethically? Where do we go for advice and guidance, especially when we don't have a formal process for ethical review?
Many organisations are having to find ways of doing more for less – including doing evaluation with fewer resources. This can mean little money (or no money) to engage external expertise and a need to rely on resources internal to an organisation – specifically people who might also have less time to devote to evaluation.
We’re delighted to be participating in this week’s conference - Impact, Innovation and Learning: Towards a Research and Practice Agenda for the Future - being held in conjunction with the launch of the Centre for Development Impact (CDI), a partnership between the Institute of Development Studies and ITAD.
Many evaluations use a theory of change approach, which identifies how activities are understood to contribute to a series of outcomes and impacts. These can help guide data collection, analysis and reporting. But what if the theory of change has gaps, leaves out important things - or is just plain wrong?