This resource and the following information was contributed to BetterEvaluation by Florian Schatz.
Authors and their affiliation
Florian Schatz and Katharina Welle, Itad
Year of publication
2016
Type of resource
This CDI Practice Paper explores the benefits and pitfalls of applying Qualitative Comparative Analysis (QCA) in an impact evaluation setting, based on three applications of the method in Itad-led evaluations and research:
- A study on how Information and Communication Technology (ICT)-based reporting can improve water supply sustainability (Welle, Williams, Pearce, & Befani, 2015): The aim of this study was to carry out a systematic analysis of the factors affecting the success of different ICT initiatives in rendering water services sustainable.
- An evaluation of the Medicines Transparency Alliance (MeTA) (Stedman-Bryce, 2015): MeTA was established in 2008 in seven pilot countries with the objective of improving evidence-based policymaking in the medicines sector for better access to essential medicines. The initiative’s intervention logic was centred on improving transparency and accountability, and the evaluation focussed on testing the validity of this approach.
- A macro evaluation of the UK Department for International Development’s portfolio in the area of social accountability (Holland, n.d.): DFID has a large and diverse portfolio of projects in the area of social accountability, and the aim of the macro evaluation was to synthesise learning from this portfolio and generate evidence on what works, for whom, in what contexts and why.
Who is this resource useful for?
- Commissioners/managers of evaluation
How have you used or intend on using this resource?
This resource helps evaluators and evaluation commissioners alike to navigate the complex challenges of using QCA in an impact evaluation. The benefits and pitfalls of QCA discussed in the paper include:
- QCA allows evaluators to compare a medium or large number of cases in a systematic manner, which is difficult to do with other qualitative approaches.
- By making all assumptions and choices explicit, QCA enforces a very systematic and transparent approach.
- Unlike quantitative approaches, QCA can use any kind of data (including categorical data), as long as the dataset is relatively complete.
- QCA is also able to identify different complex causal patterns rather than simplistic answers, which is in line with the type of causality often observed in the real world.
- QCA is relatively complicated, and the evaluation team needs to have a thorough understanding of the approach.
- The evaluation object needs to be appropriate for QCA: The evaluation object needs to comprise a number of comparable cases, the causality observed needs to fit with the QCA logic of multiple causal pathways, and a relatively complete dataset needs to be available.
- The evaluation questions need to be appropriate for QCA: QCA does not measure the net impact of an intervention, and does not explain the nuanced mechanisms at work or how they are embedded in context. Rather, it identifies packages of conditions associated with the outcome of interest.
- Linked to the point above, QCA by itself is unlikely to meet the expectations of the evaluation client, and should ideally be complemented with other approaches to ensure both breadth and depth of analysis.
Why would you recommend it to other people?
The paper presents practical lessons illustrated by three example applications of QCA, providing accessible and useful advice for anyone thinking of commissioning or applying QCA.
Schatz, F., & Welle, K. (2016). Qualitative Comparative Analysis: A Valuable Approach to Add to the Evaluator’s ‘Toolbox’? Lessons from Recent Applications. Centre for Development Impact Practice Paper Number 13.