This week we are focusing on mixed methods in evaluation. We'll have two further blogs on the subject, one exploring an evaluation that used mixed methods and the other asking whether we are clear enough about what mixed methods really means - there are many evaluations out there claiming to be mixed methods when all they do is supplement a quantitative survey with interview data.
This week's 52 Weeks of BetterEvaluation post brings our series on the BetterEvaluation Rainbow Framework to an end, and presents the final AEA-hosted webinar recording. Over the series we've introduced the seven clusters of evaluation tasks and many of the options available. You can find a list of all eight posts in the series below.
How do you balance the different dimensions of an evaluation?
Is a new school improvement program a success if it does a better job of teaching mathematics but a worse job of language? Is it a success if it works better for most students but leads to a higher rate of school dropout? What if the dropout rate has increased for the most disadvantaged? And what about the costs of the program? Is it a success if the program gets better results but costs more?
BetterEvaluation recently published a paper presenting some of the confusion that can result when commissioners and evaluators don’t spend enough time establishing basic principles and understanding before beginning the evaluation. This blog, from Mathias Kjaer of Social Impact (SI), uses a recent evaluation experience in the Philippines to present some tips on how to choose the right questions to frame an evaluation.
I’m sure most of our readers will agree that the goal of evaluation is not the fulfillment of a contract to undertake a study but the improvement in social and environmental conditions: evaluators really do want to see their evaluations used for positive, productive purposes. In these days of information overload, then, we cannot expect a published evaluation report on its own to inform or influence these improvements.
So what can be done to move from a situation where evaluation reports sit on shelves gathering dust – or worse, are misused – to one where evaluations contribute to “social betterment”?
If you are doing any kind of outcome or impact evaluation, you need to know something about whether the changes observed (or prevented) had anything to do with the program or policy being evaluated. After all, the word “outcome” implies something that “comes out of” the program – right?
BetterEvaluation recently published a new paper, ‘Two sides of the evaluation coin,’ exploring what can happen when miscommunication, changing leadership and misunderstanding disrupt the smooth running of an evaluation, and what can be done to minimise these risks. The report was written jointly by authors from both the evaluator and commissioner sides. John Rowley, who was part of the evaluation team, has blogged on the paper, saying that ‘it deals with issues that profoundly affect program evaluations but which are almost never shared in an open and public way.’ His fellow-evaluator, Pete Cranston, has also blogged about what the experience taught him about the role of evaluation in learning, and the role of failure. Now their co-author Penelope Beynon, who was a commissioner for the evaluation, shares her side of the story, and argues for the importance of recognising the emotions involved in a bumpy evaluation ride:
How many methods do you usually see in evaluation reports as having been used to collect data? Chances are you’ll see project document review, key informant interviews, surveys of some kind, and perhaps group interviews with intended beneficiaries. These methods are all useful to help describe what has happened, the outcomes and the context in which change occurred.
It’s a scenario many evaluators dread: the time has come to present your results to the commissioner, and you’ve got bad news. Failing to strike the right balance between forthrightness and diplomacy can mean you either don’t get your message across, or alienate your audience.
While we work on the remaining blog posts on the recent AEA Coffee Break webinars, this week we're highlighting content and events recently suggested to us by users.
Huge thanks to all of our users who have been pointing out great resources and useful events – keep them coming! Here are the most recent suggestions: