52 weeks of BetterEvaluation: Week 31: A series on mixed methods in evaluation

Simon Hearn


This week we are focusing on mixed methods in evaluation. We'll have two further blogs on the subject: one exploring an evaluation that used mixed methods, and the other asking whether we are clear enough about what mixed methods really means - there are many evaluations out there claiming to be mixed methods when all they do is supplement a quantitative survey with qualitative interview data.

Mixed methods are very much in the limelight at the moment, with a recent special issue of the New Directions for Evaluation journal devoted to mixed methods and a brand new professional association formed this month - the Mixed Methods International Research Association. Indeed, no evaluation toolkit would be complete without options for combining quantitative and qualitative data.

To get us going on this topic I wanted to ask: what do you understand as mixed methods in evaluation and why is it so important?

The latter question is the simpler of the two and points to one of the principles of BetterEvaluation: many of the questions that evaluations attempt to address require, by their nature, different kinds of data, different perspectives and different methods, combined in clever ways to generate the evidence needed to make the required value judgments.

I've already pointed to my opinion that mixed methods is more than the mere use of qualitative methods alongside quantitative methods, and I think this view is commonly shared. In a new paper by Michael Bamberger, published by Social Impact, mixed methods evaluation is defined as:

"An approach to evaluation that systematically integrates QUANT and QUAL methodologies and methods at all stages of an evaluation."

You can see that Michael emphasises that quantitative and qualitative methods have to be systematically integrated (not just used one after the other) at all stages of the evaluation (not just in data collection) for a design to be called mixed methods.

Howard White takes a slightly different view of mixed methods and defines them in terms of mixing counterfactual analysis with factual analysis, which can be quantitative or qualitative. In this case it's not the methods but the mode of analysis that is mixed.

Other thinkers in this field go further and describe mixed methods as a fundamental and deliberate attempt to reconcile two very different scientific paradigms. A more recent book provides fascinating insight into the qualitative and quantitative paradigms and how the tensions between them can be reconciled.

But for all the methodological wrangling, what can we as evaluators or evaluation commissioners actually do?

BetterEvaluation provides advice on a number of options to consider once you have decided to combine quantitative and qualitative data: how to collect the data (in parallel or in sequence), how to combine it (component or integrated designs), and the different purposes of combining data (enriching, examining, explaining and triangulation).
