You'll find hundreds of evaluation resources on the BetterEvaluation site. Some have come from recommendations by stewards, and some from our writeshop project or design clinics. Many great resources have also been recommended by BetterEvaluation users. This week we are highlighting some of these user-recommended resources, showing how you can find the latest new material, and explaining how you can tell us about your own recommendations.
This week BetterEvaluation is at the Australasian Evaluation Society conference in Brisbane, Australia, where the theme is "Evaluation shaping a better future: Priorities, pragmatics, politics and power".
How do you balance the different dimensions of an evaluation?
Is a new school improvement program a success if it does a better job of teaching mathematics but a worse job of teaching language? Is it a success if it works better for most students but leads to a higher dropout rate? What if the dropout rate has increased for the most disadvantaged students? And what about the costs of the program? Is it a success if the program gets better results but costs more?
What’s one of the most common mistakes in planning an evaluation? Going straight to choosing data collection methods. Before you choose methods, you need a good understanding of why the evaluation is being done. We refer to this as framing the evaluation.
Evaluation journals play an important role in documenting, developing, and sharing theory and practice.
In this week's post, we've highlighted evaluation journals that would be useful to add to your regular reading, or to refer to for specific searches.
How do we ensure our evaluations are conducted ethically? Where do we go for advice and guidance, especially when we don't have a formal process for ethical review?
Many organisations are having to find ways of doing more for less – including doing evaluation with fewer resources. This can mean little money (or no money) to engage external expertise and a need to rely on resources internal to an organisation – specifically people who might also have less time to devote to evaluation.
We’re delighted to be participating in this week’s conference - Impact, Innovation and Learning: Towards a Research and Practice Agenda for the Future - being held in conjunction with the launch of the Centre for Development Impact (CDI), a partnership between the Institute of Development Studies and ITAD.
Many evaluations use a theory of change approach, which identifies how activities are understood to contribute to a series of outcomes and impacts. A theory of change can help guide data collection, analysis and reporting. But what if the theory of change has gaps, leaves out important things – or is just plain wrong?
The term "rubric" is often used in education to refer to a systematic way of setting out the expectations for students – that is, describing what would constitute poor, good and excellent performance.