52 weeks of BetterEvaluation: Week 31: A series on mixed methods in evaluation

27th July 2013 by Simon Hearn

This week we are focusing on mixed methods in evaluation. We'll have two further blogs on the subject, one exploring an evaluation that used mixed methods and the other asking whether we are clear enough about what mixed methods really means - there are many evaluations out there claiming to be mixed methods when all they do is supplement a quantitative survey with interview data.

52 weeks of BetterEvaluation: Week 30: Manage an evaluation or evaluation system

26th July 2013 by kbruce

This week's 52 Weeks of BetterEvaluation post brings our series on the BetterEvaluation Rainbow Framework to an end, and presents the final AEA hosted webinar recording. Over the series we've introduced the seven clusters of evaluation tasks and many of the options available. You can find a list of all eight posts in the series below.

52 weeks of BetterEvaluation: Week 29: Weighing the data for an overall evaluative judgement

19th July 2013 by Patricia Rogers

How do you balance the different dimensions of an evaluation?

Is a new school improvement program a success if it does a better job of teaching mathematics but a worse job of teaching language? Is it a success if it works better for most students but leads to a higher rate of school dropout? What if the dropout rate has increased for the most disadvantaged students? And what about the costs of the program? Is it a success if the program gets better results but costs more?

52 weeks of BetterEvaluation: Week 28: Framing an evaluation: the importance of asking the right questions

9th July 2013 by Mathias Kjaer

BetterEvaluation recently published a paper presenting some of the confusion that can result when commissioners and evaluators don't spend enough time establishing basic principles and shared understanding before beginning an evaluation. This blog, from Mathias Kjaer of Social Impact (SI), draws on a recent evaluation experience in the Philippines to offer some tips on how to choose the right questions to frame an evaluation.

52 weeks of BetterEvaluation: Week 27: How can evaluation make a difference?

5th July 2013 by Simon Hearn

I’m sure most of our readers will agree that the goal of evaluation is not the fulfilment of a contract to undertake a study but the improvement of social and environmental conditions: evaluators really do want to see their evaluations used for positive, productive purposes. In these days of information overload, then, we cannot expect that simply publishing an evaluation report will be enough to inform or influence these improvements.

So what can be done to move from a situation where evaluation reports sit on shelves gathering dust (or worse, are misused) to a situation where evaluations contribute to “social betterment”?

52 weeks of BetterEvaluation: Week 25: Evaluators have feelings too: Two sides of the evaluation coin

27th June 2013 by Penelope

BetterEvaluation recently published a new paper, ‘Two sides of the evaluation coin,’ exploring what can happen when miscommunication, changing leadership and misunderstanding disrupt the smooth running of an evaluation, and what can be done to minimise these risks. Authors from both the evaluator and commissioner sides wrote the report jointly. John Rowley, who was part of the evaluation team, has blogged on the paper, saying that ‘it deals with issues that profoundly affect program evaluations but which are almost never shared in an open and public way.’ His fellow evaluator, Pete Cranston, has also blogged about what the experience taught him about the role of evaluation in learning, and the role of failure. Now their co-author Penelope Beynon, who was a commissioner for the evaluation, shares her side of the story, and argues for the importance of recognising the emotions involved in a bumpy evaluation ride:

52 weeks of BetterEvaluation: Week 24: Choosing methods to describe activities, results and context

17th June 2013 by IGuijt

How many methods do you usually see listed in evaluation reports as having been used to collect data? Chances are you’ll see project document review, key informant interviews, surveys of some kind, and perhaps group interviews with intended beneficiaries. These methods are all useful for describing what has happened, the outcomes, and the context in which change occurred.

52 weeks of BetterEvaluation: Week 23: Tips for delivering negative results

14th June 2013 by Jessica.SinclairTaylor

It’s a scenario many evaluators dread: the time has come to present your results to the commissioner, and you’ve got bad news. Failing to strike the right balance between forthrightness and diplomacy can mean you either fail to get your message across or alienate your audience.

52 weeks of BetterEvaluation: Week 22: The latest resources and events suggested by users

6th June 2013 by Nick Herft

While we work on the remaining blog posts on the recent AEA Coffee Break webinars, this week we're highlighting content and events recently suggested to us by users.

Huge thanks to all of our users who have been pointing out great resources and useful events; keep them coming! Here are the most recent suggestions: