On this site, an approach refers to an integrated package of options (methods or processes). For example, 'Randomised Controlled Trials' (RCTs) use a combination of the options random sampling, a control group, and standardised indicators and measures.

Evaluation approaches have often been developed to address specific evaluation questions or challenges. For example, the Contribution Analysis approach has been developed to address questions about the feasibility of concluding that an intervention has contributed to an outcome in circumstances where a direct causal relationship is difficult to demonstrate.

If you're looking for an approach that's not listed here, it could exist as an evaluation option in the Rainbow Framework. Try using the search tool to find it. If you still can't find it, let us know via the contact form.

List of Approaches

Information is currently available for the following approaches:

Appreciative Inquiry

A strengths-based approach designed to support ongoing learning and adaptation by identifying and investigating outlier examples of good practice and ways of increasing their frequency.

Beneficiary Assessment

An approach that focuses on assessing the value of an intervention as perceived by the (intended) beneficiaries, thereby aiming to give voice to their priorities and concerns.

Case study

A research design that focuses on understanding a unit (person, site or project) in its context, which can use a combination of qualitative and quantitative data.

Causal Link Monitoring

An approach designed to support ongoing learning and adaptation, which identifies the processes required to achieve desired results, and then observes whether those processes take place, and how.

Collaborative Outcomes Reporting

An impact evaluation approach based on contribution analysis, with the addition of processes for expert review and community review of evidence and conclusions.

Contribution Analysis

An impact evaluation approach that iteratively maps available evidence against a theory of change, then identifies and addresses challenges to causal inference.

Critical System Heuristics

An approach used to surface, elaborate, and critically consider the options and implications of boundary judgments, that is, the ways in which people/groups decide what is relevant to what is being evaluated. 

Democratic Evaluation

A range of approaches to doing evaluation that support democratic decision making, accountability and/or capacity.

Developmental Evaluation

An approach designed to support ongoing learning and adaptation, through iterative, embedded evaluation.

Empowerment Evaluation

A participatory approach designed to provide groups with the tools and knowledge so they can monitor and evaluate their own performance.

Horizontal Evaluation

An approach to learning and improvement that combines self-assessment by local participants and external review by peers.

Innovation History

A particular type of case study used to jointly develop an agreed narrative of how an innovation was developed, including key contributors and processes, to inform future innovation efforts.

Institutional Histories

A particular type of case study used to create a narrative of how institutional arrangements have evolved over time and contributed to more effective ways of achieving project or program goals.

Most Significant Change

An approach primarily intended to clarify differences in values among stakeholders by collecting and collectively analysing personal accounts of change.

Outcome Harvesting

An impact evaluation approach suited to retrospectively identifying emergent impacts by collecting evidence of what has changed and then, working backwards, determining whether and how an intervention contributed to these changes.

Outcome Mapping

An impact evaluation approach that unpacks an initiative's theory of change, provides a framework for collecting data on the immediate, basic changes that lead to longer-term, more transformative change, and allows for plausible assessment of the initiative's contribution to results via 'boundary partners'.

Participatory Evaluation

A range of approaches that engage stakeholders (especially intended beneficiaries) in conducting the evaluation and/or making decisions about the evaluation.

Participatory Rural Appraisal (PRA) / Participatory Learning for Action (PLA)

A participatory approach that enables farmers to analyse their own situation and develop a common perspective on natural resource management and agriculture at the village level.

Positive Deviance

A strengths-based approach to learning and improvement that involves intended evaluation users in identifying 'outliers' (those with exceptionally good outcomes) and understanding how they have achieved these.

Qualitative Impact Assessment Protocol (QUIP)

An impact evaluation approach without a control group that uses narrative causal statements elicited directly from intended project beneficiaries. 

Randomised Controlled Trials (RCT)

An impact evaluation approach that compares results between a randomly assigned control group and experimental group or groups to produce an estimate of the mean net impact of an intervention. 

Realist Evaluation

An approach, used especially for impact evaluation, that examines what works for whom, in what circumstances, and through what causal mechanisms, including changes in the reasoning and resources of participants.

Social Return on Investment (SROI)

A participatory approach to value-for-money evaluation that identifies a broad range of social outcomes, not only the direct outcomes for the intended beneficiaries of an intervention.

Success Case Method

An impact evaluation approach based on identifying and investigating the most successful cases and seeing if their results can justify the cost of the intervention (such as a training course).

Utilisation-Focused Evaluation

An approach that uses the intended uses of the evaluation by its primary intended users to guide decisions about how the evaluation should be conducted.


Anonymous

Reading the different approaches, I now clearly understand which approach is better depending on the setting I find myself in.

Vanesa May Quinones

I love this site; it really helps me understand my work as a MEAL officer. May I ask what approach and methodology are best to use when conducting an internal end-of-project action review, where you will be measuring the relevance, effectiveness, efficiency, and significance of the project? Thank you so much for the help.

Manbir Bishwokarma

Quite an impressive site; essential materials are available for the M&E of development projects.

Rosemary. Nyaga

Great resources

Fred Silas

The list of approaches is useful in my job. It is necessary for a consultant like me to know these approaches. It is said that 'a man's legs must be long enough to reach the ground'.


Patricia Rogers

Glad to hear you're finding the site useful. For your internal end-of-project action review, as for any evaluation, I would recommend choosing the approach and methodology that suit the purpose of the evaluation, the nature of what you're evaluating, and the resources and constraints you have. I suggest you work through the Manager's Guide to Evaluation and the GeneraTOR to ask yourself the hard questions up front about why you are doing the action review - who will use it, and how?

If you're wanting to do an after action review, then your intended purpose might be to document what was done, identify what went well and not so well, and consider what might be done in the future to build on strengths and successes and to learn from what did not work well. Check out some good questions and group processes you could use on the After Action Review page.

You might well find that answering these questions, in a way that engages the right people so the answers are valid and credible to the people who will use them, is more important than measuring relevance, efficiency, effectiveness and significance (although you might well want to consider these aspects when discussing what worked well and not so well).

Aslam Aman

Dear Dr. Patricia, I find the Better Evaluation website highly useful and very engaging. It has helped me improve my understanding of different aspects of evaluation, particularly evaluation approaches. For me, this is the most accessible and easy-to-understand source.

I have been meaning to ask a question that has been nagging me for the past few years, but never got around to doing so. In some end-of-project evaluation reports (e.g. the ones done for UNICEF), the evaluators claim that they used a "formative-summative" approach for the evaluation. This claim raised two questions for me. First, aren't formative and summative evaluations two types of evaluation associated with two different stages of the project life cycle: formative with the beginning and summative with the end of the project? Secondly, aren't these two different types, rather than two different approaches? Both types can, depending on the context, use some of the evaluation approaches listed on this page. This being the case, isn't it a contradiction in terms to say "we used a formative-summative approach" to imply a combination of the two?
I personally know at least two senior evaluators who, while doing end-line evaluations, describe their approach as "formative-summative". I don't know about others, but these two evaluators justified their use of the term (summative-formative), saying that they use it whenever end-of-project evaluations have both summative and formative elements, i.e. summative because the evaluation is done at the end of the programme, and formative because it will inform future programming. I find this argument perplexing. Won't summative evaluations, among other things, also serve to inform future programmes (there may be exceptions)?

I would appreciate it if you or any other member of the Better Evaluation team could clarify this confusion for me.
Thank you