Week 22: Using evaluation in programme design – a funder’s perspective

3rd June 2014 by Stephen Porter

Stephen Porter is Results and Evaluation Advisor for the Education and Partnerships team at DFID. In this blog he gives us a valuable insight into what a funder might be thinking as they review a development programme proposal, and how he uses evaluation evidence to make funding decisions. In comparing the information that comes from (traditional) systematic reviews with that which comes from a realist synthesis, he urges us all to think hard about the ‘how’ of development interventions, particularly livelihoods interventions.

As someone from DFID who has to make recommendations about which proposals to fund, the first feeling I get when I sit down to a new batch of proposals is trepidation. Will a proposal be well written, but missing something essential? Will it be poorly articulated, but built around a good central idea? Am I going to respond fairly and give good feedback?

Evidence has become a mainstay of development programming and, consequently, of the funding review process. With many large granting organisations emphasising what they call ‘an evidence-based approach’, proposals will face increasingly sophisticated demands for supporting evidence. Yet a key piece of evidence that can tip projects into the ‘Yes’ pile for funding is rarely available: often proposals cannot, or do not, use evidence of how something works.

Recently, evidence gathering in development has focused on getting more evidence on what works. Deworming, for example, appears to be a good idea for increasing attendance at schools, but how to implement deworming in different contexts so that it complements other interventions and supports sustainable change is rather more challenging.

My experience is that the gap in evidence on how an intervention works widens in livelihoods proposals. Livelihoods interventions are often complex and multifaceted, linking empowerment, microfinance and household production, yet evidence on such interventions that meets high academic standards is patchy.

Using systematic reviews and realist syntheses to understand the how

During a recent review process I turned to systematic reviews to augment my appreciation of the proposed interventions and found a greater number than expected. The current systematic reviews were useful in identifying a variety of benefits of different interventions (for example, here, here and here). These kinds of reviews are important, but they were less useful for understanding how different interventions interact, or how well they work when scaled up in real-world conditions.

Development projects are often nuanced in the effects they seek to support; it can be difficult to find systematic reviews or relevant evaluation designs that provide guidance for multi-faceted programme designs, such as those in livelihoods. Besides, even if these existed, they might only tell you that an intervention is likely to work, not how to get it to work in your context. In other words, they provide only part of the evidence required for strengthening a project design and, by extension, the proposal.

Realist evaluations and syntheses are an example of a form of evidence that retains a good degree of academic rigour. A realist evaluation provides evidence of how interventions work by identifying the causal mechanisms that are activated under certain contextual conditions. A realist synthesis identifies mechanisms that can be targeted across a range of contexts to improve the chance of success. Where they exist, realist evaluations and syntheses can help programme designers to identify where current interventions can be adapted and where additional project partners are needed to bring in niche skills.
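Realist findings are often summarised as context–mechanism–outcome (CMO) configurations, and a proposal can be checked against them in the spirit described above. The sketch below is purely illustrative and not from this blog: the CMOConfiguration class, the example mechanisms and the naive substring check are all hypothetical, and a real review would rely on judgement rather than string matching.

```python
from dataclasses import dataclass

@dataclass
class CMOConfiguration:
    """One realist finding: in a given context, a mechanism fires and produces an outcome."""
    context: str    # conditions under which the mechanism is triggered
    mechanism: str  # causal process the intervention activates
    outcome: str    # change observed when the mechanism fires

# Hypothetical mechanisms of the kind a realist synthesis of WASH interventions might surface.
synthesis_findings = [
    CMOConfiguration(
        context="services made more accessible to households",
        mechanism="time and expense saved",
        outcome="benefits beyond diarrhoea reduction",
    ),
    CMOConfiguration(
        context="weak local accountability",
        mechanism="operational decisions captured by wealthier groups",
        outcome="inequitable service coverage",
    ),
]

def missing_mechanisms(proposal_text: str, findings: list) -> list:
    """Return mechanisms from the synthesis that the proposal text never addresses."""
    return [
        f.mechanism
        for f in findings
        if f.mechanism.lower() not in proposal_text.lower()
    ]

# A reviewer-style check: which known mechanisms does this (toy) proposal ignore?
proposal = "Water points will be sited so that time and expense saved can be reinvested."
print(missing_mechanisms(proposal, synthesis_findings))
# -> ['operational decisions captured by wealthier groups']
```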

Comparing some of the systematic reviews and realist syntheses consulted during a recent round of programme proposals reveals the differences.

Sector: Water, Sanitation and Hygiene (WASH)

Systematic review

All of the [WASH] interventions studied were found to reduce significantly the risks of diarrhoeal illness. Most of the interventions had a similar degree of impact on diarrhoeal illness, with the relative risk estimates from the overall meta-analyses ranging between 0.63 and 0.75. The results generally agree with those from previous reviews, but water quality interventions (point-of-use water treatment) were found to be more effective than previously thought, and multiple interventions (consisting of combined water, sanitation, and hygiene measures) were not more effective than interventions with a single focus.

Realist review

Elements of the following mechanisms can be built into programme design, either through partnerships or by recognising assets within the organisation:

WASH interventions alleviate determinants of different sources of ill-being. Multiple benefits (health, time and expense saved by more accessible services), in addition to diarrhoea reduction, may be realised as a direct consequence of the intervention.

Operational decisions may be affected by political influence, corruption, and ease of access; wealthier and healthier groups have greater influence.

The intervention may have spillover effects, both in the use of water and in the transfer of information to other communities and partners.

Sector: Nutrition

Systematic review

The main objective of the review was to determine the effectiveness of school feeding programmes in improving physical and psychosocial health for disadvantaged school pupils. Attendance in lower-income countries was higher in experimental groups than in controls; our results show an average increase of 4 to 6 days a year. Maths gains were consistently higher for experimental groups in lower-income countries.

Realist review

Process factors that seem to enhance the efficacy of school feeding programmes:

The target group has a clear nutritional deficiency (usually inadequate energy intake) and the trial is oriented to correcting this rather than to short-term hunger relief

Well-organised schools that form part of an efficient distribution chain for the supplement

Interventions developed with local teams rather than designed by distant experts

The supplement is piloted to exclude intolerance and confirm palatability and acceptability

Measures are in place to ensure that the food supplement is consumed (e.g. close supervision of eating)

In disaffected young people, attention is paid to social aspects of the meal

As can be seen in the above table, the systematic reviews accessed tell you more about what works; the realist syntheses tell you about the mechanisms that affect the changes an intervention can realise. Recent moves in experimental and quasi-experimental designs are now emphasising questions around mechanisms, so in future there should be more evidence on how and why. Currently, realist evaluations and syntheses are useful as a check against a proposal: has the proposal highlighted how it intends to affect or mitigate a certain mechanism? Has the proposal missed any of the main mechanisms? Answering these and similar questions does not guarantee a proposal’s success; it simply indicates that the design of the programme has been informed by reflection on evidence of how the intervention works.

So this blog finishes with two requests. First, we should aim to conduct more evaluations that answer ‘how’ questions, whether through realist evaluations, experimental designs or systematic reviews. There is surprisingly little out there, and what there is seems to sit mostly in the medical field. Second, when putting together development programmes, do not just write about what works, or what has worked for you; try to write about how something worked, and use sources of evidence as verification.

Image: Livelihood activities in Shyamnagar Upazila, Bangladesh. Photo by Sami A. Khan, 2012.

Author: Stephen Porter, Results and Evaluation Advisor, DFID. London, United Kingdom.

Comments

Bernadette Wright

Yes. Knowing how much impact we had on our goals is often important. However, to replicate and improve upon effective practices and focus resources where they can do the most good, we also need to understand how and what it takes to get from where we are to the successful place we want to be. Thanks!

Juan-Carlos Alegre

I agree that in most development programmes, knowing the "HOW" has become critical to delivering proven technical interventions. But as Stephen also mentions, it is important to recognise UNDER WHAT CONDITIONS (the context) and FOR WHOM interventions are working or not. Even in development sectors that already have many evidence-based interventions proven to be effective (e.g., global health, with known public health interventions that save lives), the contexts and mechanisms that activate outcomes are key to achieving success. As also mentioned in the blog, there is still a long way to go in systematically documenting the "how", "under what conditions" and "for whom" of interventions. This is why "implementation research" is becoming quite important for documenting what is working (or not), how, why, under what conditions and for whom, as an attempt to obtain answers to these questions before a formal realist evaluation is conducted. Answering those questions will shed much light on how best to transfer (as opposed to replicate) interventions so that they have a better chance of working in the many complex settings where we operate.

Jerim Obure

In my experience, the demonstration of evidence in evaluation processes, and in subsequent proposals, is often feeble because most projects/programmes do not inculcate evaluative thinking right at the onset of the project, particularly at the design stage. Most organisations (donors and grantees alike) still perceive a good evaluation as an expensive investment that can be ignored at the beginning of the project until it must be done at the end; this also explains the scarcity of comprehensive ex ante evaluations. When the ex post evaluation is done, it leaves the organisation/project learning the same lessons over and over, with little framework for evidence building and documentation! So, poor set-up for evaluation at design leads to poor evidence at ex post evaluation, and then to scanty evidence in subsequent proposals! How, then, to reconcile realist synthesis with the challenge of a requisite but weak ex ante set-up, and the merits of demonstrable evidence in the next funding loop? Interesting discussion!
