Qualitative impact protocol
The Qualitative Impact Protocol (QuIP) is an impact evaluation approach that collects and documents narrative causal statements directly from those affected by an intervention.
Overview
Where possible, data is collected through ‘double blindfolded’ interviews, meaning neither the interviewer nor the interviewee knows the details of the specific intervention being evaluated. The narrative data this produces can provide rich insights into intervention-related and contextual factors affecting a wide range of intended and unintended outcomes. This is particularly useful for understanding drivers of change in complex situations and variations in how different stakeholders perceive these.
A second distinctive feature of QuIP is its approach to coding narrative data. In contrast to generic thematic coding of concepts, the QuIP directly codes causal claims linking causes/drivers and effects/outcomes. This facilitates the visualisation of findings through causal maps and analysis of all the causal pathways in the data, including evidence of how an intervention contributes to intended and unintended outcomes alongside other contributory factors.
QuIP can be used as a standalone approach for an evaluation or to provide evidence for use alongside other approaches and methods, including RCTs and process tracing. Evaluations using QuIP can also provide an independent reality check of a predetermined theory of change, helping stakeholders to assess and learn from the social impact of their work and demonstrate this to others. The QuIP places the perceptions of often marginalised stakeholders of an intervention at the centre of the evaluation, enabling them to share and give feedback on their experiences in an open, credible, and respectful way.
Key processes in QuIP
- Jointly determine scope: The scope of a study is jointly determined by an evaluator and a commissioner; the shared purpose is to provide a useful ‘reality check’ on the commissioner’s prior understanding of the impact of a specified activity or set of activities.
- Collect data: A useful benchmark (that emerged through the design and testing phase) is that a ‘single QuIP’ comprises 24 semi-structured interviews and four focus groups. This is typically sufficient to understand the main causal pathways being experienced by a group of respondents who share broadly similar characteristics and received a broadly similar intervention. Specific studies may be based on multiples or variants of this, depending on the heterogeneity of the sample.
- Select sources using purposeful sampling: Interviewees are selected purposefully from a known population of stakeholders, drawing on available data about variation in their exposure to the intervention, evidence of changes they are experiencing, and other factors likely to affect this. Selection of the sources most likely to add to prior understanding depends partly on whether the focus is on exploring new causal pathways openly or confirming existing theories of change.
- Use double blindfolding during data collection: Where possible, initial interviews and focus groups are conducted by independent field researchers with restricted knowledge of the activity being evaluated. This means that respondents are also unaware of what intervention is being evaluated, a feature referred to as double blindfolding (not blinding, because the blindfolding is voluntary and can be removed at any time). Blindfolding helps to avoid data being narrowly framed by reference to the intervention and to reduce the possibility of confirmation bias.
- Transcribe data: Transcripts of interviews and focus groups are written up in pre-formatted spreadsheets to facilitate coding and thematic analysis. This also entails translating the narrative into another language if needed. Wherever respondents agree, interviews and focus group discussions are also recorded to facilitate production and quality checks on transcribed narratives.
- Analyse data: An analyst (not one of the field researchers) codes the data using causal qualitative data analysis. This can be approached in an exploratory and/or confirmatory way. Exploratory coding identifies all stated causal links (i.e., pairs of drivers and outcomes). Confirmatory coding also classifies causal claims according to whether they explicitly link outcomes to specified activities and do so in ways that are implicitly consistent with the commissioners’ theory of change or are incidental to it.
- Prepare summary tables and visualisations: This can be done in various ways; customised tools (such as Causal Map) can semi-automate the generation of summary tables and visualisations, speeding up interpretation of the evidence.
- Ensure transparency of summaries: Software should permit easy reference back from coded evidence of causal pathways to the raw narrative data in the transcripts to facilitate quality assurance, auditing, peer review, deeper learning, and the extraction of important or illustrative text from the transcripts.
- Provide and review summary reporting: Summary reports of the evidence are a starting point for dialogue and sense-making between researchers, commissioners, and other stakeholders, thereby influencing follow-on activities.
Adapted from “Attributing development impact”, Box 1.2: A brief description of the QuIP, p. 7.
Key characteristics / concepts of QuIP
The importance of causal narratives from intended beneficiaries
QuIP gathers evidence of an intervention’s impact through carefully choreographed interviews and systematic analysis of the causal statements in respondents’ narrative responses. It has been used particularly to interview intended beneficiaries of projects, including those who are remotely located and have limited literacy skills. Respondents are asked to talk about the main changes in their lives over a pre-defined recall period corresponding to the timing of the intervention being assessed. Questions always start by asking about outcomes and then probing backwards (‘back-chaining’) to identify what respondents perceive to be the main drivers of these changes.
Double-blindfolded data collection
The research team conducting interviews are independent and, where possible, ‘blindfolded’, meaning that they are not aware of who has commissioned the research or which project is being assessed. This helps to reduce pro-project and confirmation bias and encourages a more open and holistic discussion with respondents about outcomes in different areas of their lives and the drivers of these changes. Some QuIP studies combine interviews with focus groups if there is a case for reflecting the views of a certain group; these can be run using the same approach as the individual interviews (open-ended questions eliciting causal pathways) or as a traditional focus group. QuIP can be used to enquire into changes experienced by individuals, households, communities and organisations.
Purposeful sampling
The QuIP approach to sampling is to select cases, and sources of information about those cases, through rigorous purposeful sampling rather than seeking a statistically representative sample. The approach is ‘Bayesian’ in its emphasis on selecting sources most likely to enrich prior understanding. Where a lot of information is available about stakeholder groups, the selection of those to interview is informed by what is already known about variation in their exposure to the intervention, as well as any evidence of changes in outcome indicators. Confirmatory sampling is also informed by theory: for example, the selection is stratified to include cases observed to be doing better or worse than expected, or that are anomalous in other ways. Where little information about potential respondents is available, sample selection aims to explore as much diversity of context and stakeholder experience as possible with the resources available, even if complete ‘saturation’ cannot be assured. These principles reflect the emphasis in QuIP on gaining a holistic understanding of ‘what works well for whom and why’ rather than estimating the precise average impact or treatment effect of one variable on another.
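For readers who want to see what this could look like in practice, the sketch below shows one hypothetical way stratified, purposeful selection might be operationalised where monitoring data exist, grouping potential respondents by exposure and by whether their observed outcomes deviate from expectations. The field names, categories and selection rule are illustrative assumptions, not part of the QuIP protocol itself.

```python
import random
from itertools import groupby

# Hypothetical monitoring records for potential respondents (invented for illustration).
respondents = [
    {"id": "R01", "exposure": "high", "outcome_vs_expected": "better"},
    {"id": "R02", "exposure": "high", "outcome_vs_expected": "worse"},
    {"id": "R03", "exposure": "low",  "outcome_vs_expected": "as expected"},
    {"id": "R04", "exposure": "none", "outcome_vs_expected": "better"},  # anomalous case
    {"id": "R05", "exposure": "high", "outcome_vs_expected": "as expected"},
    {"id": "R06", "exposure": "low",  "outcome_vs_expected": "worse"},
]

def purposeful_sample(pool, per_stratum=1, seed=42):
    """Pick a small number of cases from each (exposure, outcome) stratum so that the
    sample deliberately covers variation, including anomalous cases, rather than
    aiming for statistical representativeness."""
    rng = random.Random(seed)
    stratum = lambda r: (r["exposure"], r["outcome_vs_expected"])
    selected = []
    for _, group in groupby(sorted(pool, key=stratum), key=stratum):
        members = list(group)
        selected.extend(rng.sample(members, min(per_stratum, len(members))))
    return selected

for r in purposeful_sample(respondents):
    print(r["id"], r["exposure"], r["outcome_vs_expected"])
```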
Open-ended data collection
A QuIP questionnaire includes a sequence of open-ended questions covering different possible outcome domains, with supplementary probing questions, as well as closed questions to confirm the respondent’s perception of overall change in that outcome domain over the specified period. Typical interviews with individuals cover domains such as sources of income, food consumption, health, intrahousehold relationships, and well-being. Normally, these reflect expected areas of change in respondents’ lives according to a project’s theory of change, but the interview is framed around outcome domains rather than specific activities or inputs in order to collect broader information about what has changed and to capture unintended and unexpected outcomes.
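As a rough illustration of this structure, the sketch below lays out a domain-based schedule with an open question, back-chaining probes and a closing perception-of-change question for each domain. The domains shown and the question wording are invented for illustration, not taken from an actual QuIP questionnaire.

```python
# Illustrative layout of a domain-based, outcome-first interview schedule.
# A real QuIP schedule would be tailored to the project's expected areas of change.
interview_schedule = [
    {
        "domain": "sources of income",
        "open_question": "What changes, if any, have there been in how your household "
                         "earns money over the last two years?",
        "probes": [  # back-chaining: work from the reported outcome towards its perceived drivers
            "What do you think caused that change?",
            "And what led to that?",
        ],
        "closed_question": "Overall, is your household income better, worse, or about "
                           "the same as two years ago?",
    },
    {
        "domain": "food consumption",
        "open_question": "What changes, if any, have there been in what your household eats?",
        "probes": ["Why do you think that changed?"],
        "closed_question": "Overall, does your household eat better, worse, or about "
                           "the same as two years ago?",
    },
]

for section in interview_schedule:
    print(section["domain"].upper())
    print("  Open:", section["open_question"])
    for probe in section["probes"]:
        print("  Probe:", probe)
    print("  Closed:", section["closed_question"])
```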
Thematic analysis of causal narratives
Once the data have been collected, QuIP uses a robust thematic coding method, systematically coding for drivers, outcomes and attribution. An inherent challenge of qualitative research is managing the large quantity of data collected; QuIP addresses this by coding only stories of change and by using a systematic and replicable approach to coding, which speeds up the process. The results allow analysis of key stories of change and identification of trends and patterns between different types of respondents.
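To make the coding step more concrete, here is a minimal sketch of how coded causal claims might be represented and filtered. The field names (respondent_id, driver, outcome, attribution, quote) and the sample claims are assumptions made for illustration, not Bath SDR's actual codebook.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CausalClaim:
    """One coded causal link extracted from a transcript (field names are illustrative)."""
    respondent_id: str  # anonymised source of the claim
    driver: str         # stated cause, e.g. "joined savings group"
    outcome: str        # stated effect, e.g. "more stable household income"
    attribution: str    # "explicit", "implicit" or "incidental" link to the intervention
    quote: str          # verbatim text kept for traceability back to the transcript

# A few coded claims from hypothetical interviews.
claims: List[CausalClaim] = [
    CausalClaim("R01", "joined savings group", "more stable income", "explicit",
                "Since the group started we can borrow when the harvest fails."),
    CausalClaim("R01", "drought", "lower crop yields", "incidental",
                "The rains failed again this year."),
    CausalClaim("R07", "training on record keeping", "better business decisions", "implicit",
                "I now track what I spend, so I waste less."),
]

def claims_linked_to_intervention(coded: List[CausalClaim]) -> List[CausalClaim]:
    """Confirmatory view: keep only claims coded as explicitly or implicitly
    connected to the intervention's theory of change."""
    return [c for c in coded if c.attribution in ("explicit", "implicit")]

for c in claims_linked_to_intervention(claims):
    print(f"{c.respondent_id}: {c.driver} -> {c.outcome} ({c.attribution})")
```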
Reporting of narratives using visualisations and extracts
Findings are presented in interactive dashboards and summary reports that rely on the generation of causal maps and the selection of text extracts to illustrate and enrich understanding of the most salient causal processes thereby identified. QuIP produces comprehensive and auditable data on causal processes of change but does not set out to quantify impact, so it does not provide average treatment effects or statistically representative frequency counts. Instead, it offers rich, detailed stories of change in a digestible way, allowing individual respondents to have a voice in the results.
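The following sketch illustrates, in simplified form, how coded driver-to-outcome links could be aggregated into the edges of a causal map by counting the distinct respondents who cite each link. Dedicated tools such as Causal Map do this far more richly and interactively; the data and rendering here are invented for illustration.

```python
from collections import Counter, defaultdict

# (respondent_id, driver, outcome) triples produced by the coding step (invented data).
coded_links = [
    ("R01", "joined savings group", "more stable income"),
    ("R02", "joined savings group", "more stable income"),
    ("R02", "more stable income", "children staying in school"),
    ("R05", "drought", "lower crop yields"),
    ("R07", "joined savings group", "more stable income"),
]

# Count distinct respondents per causal link, so each edge weight reflects how many
# sources mentioned that driver -> outcome connection.
sources_per_edge = defaultdict(set)
for respondent, driver, outcome in coded_links:
    sources_per_edge[(driver, outcome)].add(respondent)

edge_weights = Counter({edge: len(sources) for edge, sources in sources_per_edge.items()})

# A crude text rendering of the causal map, strongest links first.
for (driver, outcome), n in edge_weights.most_common():
    print(f"{driver} --[{n} respondent(s)]--> {outcome}")
```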
Causal Pathway Elements
Features consistent with a causal pathways perspective
A causal pathways perspective on evaluation focuses on understanding how, why, and under what conditions change happens or has happened. It is used to understand the interconnected chains of causal links which lead to a range of outcomes and impacts. These causal pathways are likely to involve multiple actors, contributing factors, events and actions, not only the activities associated with the program, project or policy being evaluated or its stated objectives.
QuIP pays attention to the following features of a causal pathways perspective:
- Valuing actors’ narratives: QuIP purposefully collects narratives from all stakeholders and particularly intended beneficiaries about what has changed and what they perceive to be the main drivers of changes.
- Addressing power and inclusion: QuIP can help collect data from a wider range of stakeholders, including those who are harder to reach and often marginalised in evaluation activity. Findings can also be shared with them to facilitate joint learning and action.
- Articulating explicit causal pathways: QuIP-informed causal maps identify the steps in causal pathways and multiple paths to change. It offers an empirical approach to constructing or reviewing theories of change, relating both to specific projects and to an organisation’s wider strategy.
- Paying attention to a range of outcomes and impacts: QuIP interviews start with a set of broad outcome domains to accommodate holistic reflections on causal drivers and outcomes - whether intended or unintended, positive or negative.
- Understanding contextual variation: The causal maps generated by QuIP can be filtered and presented for different sub-groups of those interviewed, or even individuals, to illustrate how outcomes are affected by respondents’ particular circumstances.
- Using an iterative, bricolage approach to evaluation design: Several rounds of time-limited QuIP studies may be appropriate, and/or they can contribute to the iterative design of other evaluation activities.
- Drawing on a range of causal inference strategies: QuIP involves asking respondents to explain the reasons behind different outcomes. This implicitly invites respondents to imagine what might have happened without the intervention, thereby uncovering the ‘what if’ scenarios or latent counterfactuals. This approach, known as ‘self-reported attribution’, generates data without needing a comparison or control group. This contrasts with evaluation designs based on statistically inferred attribution across multiple cases.
- Taking a complexity-appropriate approach to evaluation quality and rigour: The credibility of a QuIP study unavoidably depends on how well source interviews and focus groups are conducted and on the rationale underpinning their selection. Done well, the open-ended and blindfolded approach to questioning, working back from outcomes, can generate fuller and more nuanced causal information than more heavily scaffolded interviews. QuIP coding and analysis contribute to credibility and rigour by being systematic and transparent, enabling others to review the process of building causal maps from the underlying data.
Background
History of QuIP
QuIP was developed by researchers at the University of Bath’s Centre for Development Studies (CDS). After developing the method through a three-year DFID/ESRC-funded research project, staff from CDS set up Bath Social Development Research (Bath SDR Ltd) as a social enterprise dedicated to contributing to innovation in evaluation, particularly through further development and dissemination of QuIP. Since then, Bath SDR has used QuIP in over 80 studies across more than 20 countries (as of February 2024). Projects evaluated have ranged widely, from rural livelihoods interventions at the household level to capacity building of community organisations working in the field of sexual and reproductive health rights.
Methods that are part of QuIP
BetterEvaluation defines an approach as a systematic package of methods. The Rainbow Framework organises methods in terms of more than 30 tasks involved in planning, managing and conducting an evaluation. Some of the methods used in QuIP and the evaluation tasks they relate to are:
- Decide who will conduct the evaluation
- External consultants: QuIP studies often engage external consultants to mobilise skilled interviewers with the required language and fieldwork skills.
- Determine what constitutes high quality evaluation
- Bias reduction: An independent, blindfolded team conducts interviews where possible and appropriate. The interviewers and respondents are not aware who has commissioned the research or which project is being assessed. This helps to reduce pro-project and confirmation bias and enables a broader and more open discussion with respondents about all outcomes and drivers of change.
- Transparency: QuIP findings can be traced back to the coded interview and focus group data to provide transparent and traceable links between analysis and evidence.
- Triangulation: QuIP uses triangulation of evidence to validate data through cross-verification from more than two sources.
- Evaluate the evaluation
- Sensemaking/verification workshops: QuIP evaluations may include sense-making or verification workshops involving the evaluation team, project staff, respondents and other stakeholders. These workshops also encourage sharing and discussion of the findings, which supports their use.
- Develop the evaluation design
- Using joint evaluation design, the evaluation team and the commissioning agency develop the QuIP study design together, including agreeing on sample size, selection strategy, the mix of data collection tools, and the specification of outcome domains around which to design interview and focus group schedules.
- Develop theory of change/programme theory
- Articulate mental models: Through open-ended questionnaires and interviews, the evaluation team elicits the mental models of participants in the program.
- Causal Mapping is used to represent mental models showing how drivers and outcomes are understood to be connected.
- Sample
- Purposeful sampling: The QuIP approach to sampling is to select cases through rigorous purposeful sampling rather than seeking a large representative sample. Where good monitoring data are available, they can be used to decide the number, location and variation of respondents selected, based on differences in context, geography, treatment and/or positive and negative results.
- Collect or retrieve data
- Focus groups
- Interviews
- Questionnaires using a combination of exploratory open-ended questions and closed questions
- Personal stories, which provide a narrative of participants’ experiences.
- Analyse data
- Data are analysed by coding causal claims, focusing on beneficiaries’ perceptions of what changes they have experienced and what they think contributed to these; the coded claims are then assembled into causal maps.
- Understand causes
- QuIP uses key informant attribution, which focuses on eliciting beneficiaries’ perceptions of what changes have been produced and what has contributed to these.
- This can be triangulated by conducting similar interviews with other stakeholders in the system to understand different perceptions about the actual (not intended) causal pathways relating to the key domains. Data from the QuIP can also contribute to further data collection and analysis using process tracing.
- Develop reporting media
- The authors of a QuIP report communicate the findings from causal interviews using visualisations in the form of causal maps. These can be developed and shared using an interactive dashboard in the bespoke software Causal Map, which is designed to allow transparent peer review of qualitative coding and to encourage interaction with the causal pathways represented, by allowing users to create and explore their own causal maps using different filters.
- QuIP can report extracts from or whole case studies based on some of the personal stories which have been gathered.
Examples
Tearfund conducted three separate QuIP studies between 2016 and 2018 to evaluate their Church and Community Mobilisation (CCM) project in Uganda, Sierra Leone and Bolivia. CCM is based on a theory of development centred on self-empowerment and community-based social improvement, fostered through theological resources and religious spaces. CCM is not a programme with clearly defined physical deliverables or time frames. Rather, through Bible studies, discussion tools, and group activities, it seeks to ‘awaken’ local church leaders, congregations, and poor rural communities and to encourage them to collaborate in realising their own development.
The aim of the evaluations was to provide deeper insights into the program to promote internal learning and improvement and to share what was learnt with partners and community participants. This is a project where there is no real baseline, beneficiaries are fluid and impacts cannot easily be measured quantitatively. The QuIP offered an opportunity to understand more about how people’s lives, beliefs and attitudes had changed, and what had influenced any changes.
In individual and focus group interviews in Uganda, respondents reported positive changes, such as increased empowerment and improved community relationships, as well as negative changes, such as decreased material assets and reduced productivity. Analysis of the data provided a broad picture of drivers of change, including the impact of the CCM project among other factors. Over half the respondents cited CCM, unprompted, as a driver of positive change in their lives. Tearfund then validated the analysis with a wider group of respondents to understand what this meant for future community work. This helped to close the feedback loop and engage respondents with the data collected through interviews.
You can read more in-depth case studies in the QuIP book, Attributing Development Impact: QuIP Casebook (2019, available for free online), which also contains detailed theoretical background and practical guidelines.
Advice for choosing QuIP
Reasons for choosing QuIP
The QuIP casebook identified three reasons for choosing QuIP based on ten case studies:
- Congruence or fit with core values: for example, QuIP’s emphasis on self-reported attribution places high value on intended beneficiaries’ own perceptions.
- QuIP’s combination of exploratory and confirmatory potential: it can pick up on unexpected outcomes, as well as expected but hard-to-measure ones, and yield evidence of the causal mechanisms behind them. The exploratory dimension was valued as a way of assessing the possible risks associated with an intervention as well as collecting evidence of positive social impact.
- Its value as a more cost-effective and flexible alternative to quantitative impact assessment.
What types of projects and programs would QuIP be appropriate for?
QuIP was developed to evaluate the impact of social and development programs to address questions of causal attribution in complex environments where many factors may interact to contribute to program outcomes, and there may be many different outcomes and causal pathways. It has been used in a variety of program contexts, including rural livelihoods, microfinance and savings, labour conditions, organisational capacity development, education, health, nutrition, water & sanitation, sexual & reproductive health, and community mobilization.
What types of evaluations is QuIP appropriate for?
QuIP is particularly useful for evaluations which are seeking to understand the range of effects and how these have been achieved, including the contribution of other factors and the causal pathways by which they have been achieved. Given that there are no direct questions about the intervention, it is not suitable in contexts where respondents may not be able to relay a causal pathway to change, either because the expected outcome is relatively marginal for them or because they don’t have a clear understanding of why change has happened. If specific feedback on an intervention is required, then a different approach should be used, although it can be combined with QuIP interviews (e.g. process-focused focus groups combined with outcome-focused QuIP individual interviews).
What level and type of resources are required for QuIP?
The design of a QuIP evaluation starts with a theory of change. If the program doesn’t have an explicit theory of change, implicit understandings of how change happens can be developed with the commissioning agency into a theory of change.
QuIP requires the ability to engage local field researchers who do not know which program is being evaluated and who speak the local language.
Data collection generally occurs over a two-week period (for 24 interviews and 4 focus groups); the full QuIP process typically takes around three months from start to finish.
How might QuIP be part of an effective overall evaluation design/combined with other approaches and methods?
The paper "From narrative text to causal maps: QuIP analysis and visualization" discusses how a QuIP analysis relates to other forms of enquiry and suggests six ways that data from a QuIP study can complement evidence from other approaches.
- "Pilot studies. QuIP-generated evidence can help to clarify concepts, select factors (variables) and prioritise the causal pathways to be investigated subsequently in greater depth or on a larger scale, including through use of surveys.
- Theory-led process tracing. A QuIP study can be one useful component of process tracing and contribution analysis that aims to identify packages of necessary and sufficient conditions for the achievement of specified outcomes. Such research entails coming up with a range of possible theoretical explanations or mechanisms for a specified outcome, followed by empirical tests to help decide which explanations are most likely to apply to different situations. QuIP studies can generate this kind of empirical evidence. Citation counts can also inform the process of Bayesian updating: raising or lowering confidence in prior explanations of what is happening.
- Mixed methods impact evaluation. The classic design comprises open-ended interviews and focus groups (the qualitative ‘small n’ component) alongside a large-scale survey (the quantitative ‘large n’ component) which could be a randomized controlled trial, for example. The quantitative element aims to generate precise and valid estimates of the average or typical statistical association between key input variables or ‘treatments’ (X) and key outcomes (Y) across a defined population. If well designed, causality can also be inferred from these associations. The qualitative component, which could be one or more QuIP studies, helps to illuminate the possible causal mechanisms driving the observed changes, and contributes to understanding variation in the impact across the population.
- Process evaluation. These typically combine a thorough review of documentation about a specific project with key-informant interviews to identify and explain the reason for progress (or lack of) in implementing a project as planned. They are conducted by one or a team of subject specialists, often over a relatively short period of time. Process evaluations often struggle to collect and analyse meaningful feedback from clients, end-users and intended beneficiaries. This gap can be filled by including a QuIP study as one component of the process evaluation.
- In-depth follow-up studies. In addition to conducting a QuIP alongside other studies, an additional possibility is to utilise it as a way of following up on a particular question or issue generated by a previous study. This might arise, for example, where interpretation of a large-scale survey is proving difficult or contentious; or it might help to understand reasons for variation in the experience of different participants in a project. One advantage of this approach is that the information generated by an earlier baseline or repeat survey provides a strong foundation for purposeful selection of respondents for the QuIP. This fits well with the tradition of ‘realist evaluation’, the role of the QuIP being to assist in identifying ‘context, mechanism, outcome’ configurations experienced by different respondents.
- Participatory sensemaking. The evidence generated by QuIP studies is generally primarily intended for sharing with managers and commissioners of projects, programmes and organisations, particularly where there are large gaps (geographical and cultural) along the financing chain. These can be linked to what some economists call ‘information asymmetries between principals and agents’. However, the interpretation and use of QuIP evidence need not feed only ‘upwards’ along the financing chain. Appropriately visualised there is a lot of scope for sharing them with other stakeholders too, including feeding back to those interviewed. An alternative to this is participatory causal mapping. This brings together stakeholders to agree on a map and can contribute to promoting collaboration and building a common understanding of a system or issue. QuIP, in contrast, permits more detailed analysis of how cognitive causal maps vary between stakeholders." (BSDR, 2021)
Advice for using QuIP effectively
The QuIP casebook suggests working with local, established academic institutions or consultancies to manage and recruit field researchers with the personal backgrounds, language skills, and enthusiasm to facilitate building a strong rapport with respondents.
QuIP studies are implemented by a lead evaluator who consults closely with the commissioner. They, in turn, identify an independent researcher able to mobilise skilled interviewers with the required language and fieldwork skills. Where possible, the interviewers are not informed which program is being evaluated. Interviewers organise and conduct the data collection (interviews and focus groups) to ensure a completely independent approach to interviews. Close consultation over ethics, including how researchers gain access to respondents and introduce themselves, is critically important, given that they generally do not do so through the agency implementing the activity being evaluated. Likewise, quality depends on careful piloting of the data collection instruments, particularly to address cultural and language translation issues.
An important responsibility of the commissioning agency is to furnish lists of stakeholders affected by an intervention from which people will be selected for interview.
Where possible and appropriate, the team conducting interviews is independent and blindfolded; neither they nor the respondents are aware who has commissioned the research or which project is being assessed. This helps to reduce pro-project and confirmation bias and enables a broader and more open discussion with respondents about all outcomes and drivers of change.
Inclusion
It is important to identify and include diverse perspectives, attending to different types of diversity (linked to purposeful sampling).
Challenges and potential pitfalls
It can be challenging for the field-based evaluation team to recruit participants for interviews and questionnaires when the team is not formally aligned with a particular program.
Resources
Guides
- Getting started & FAQs
A variety of short introductory guides is available from Bath SDR’s Resources page.
Examples
- Assessing rural transformations: piloting a qualitative impact protocol in Malawi and Ethiopia (archive link)
This working paper by James Copestake and Fiona Remnant reports on findings from four pilot studies of a protocol for qualitative impact evaluation of NGO-sponsored rural development projects in Malawi and Ethiopia.
Original Authors: Fiona Remnant, James Copestake and Rebekah Avard
Updated March 2024 by: Fiona Remnant, James Copestake, Patricia Rogers and Kaye Stevens
Sources
The feature image for this page was created by Chris Lysy at FreshSpectrum.com.
All resources related to ‘Qualitative impact protocol’:
- Assessing rural transformations: Piloting a qualitative impact protocol in Malawi and Ethiopia
- Attributing development impact: The qualitative impact protocol (QuIP) case book
- Bath social & developmental research ltd. (BSDR) website
- Case study: QuIP & RCT to evaluate a cash transfer and gender training programme in Malawi
- Case study: Using QuIP to evaluate Tearfund’s church and community transformation programme
- Causal Pathways 2023 Symposium and 2024 introductory sessions
- Causal Pathways introductory session: Qualitative Impact Protocol (QuIP)
- Comparing QuIP with thirty other approaches to impact evaluation
- Cracking causality in complex policy contexts
- Does our theory match your theory? Theories of change and causal maps in Ghana
- From narrative text to causal maps: QuIP analysis and visualisation
- Lost causal: Debunking myths about causal analysis in philanthropy
- QuIP and the Yin/Yang of Quant and Qual: How to navigate QuIP visualisations
- QuIP in action: Save the Children case study
- QuIP used as part of an evaluation of the impact of the UK Government Tampon Tax Fund (TTF)
- QuIP: Understanding clients through in-depth interviews
- Qualitative impact protocol (QuIP)