52 weeks of BetterEvaluation: Week 4: Including unintended impacts

24th January 2013 by Patricia Rogers

Evaluation is not just about assessing whether objectives have been met.  Identifying and considering unintended impacts can be a critically important part of deciding whether or not a program, a policy or a project has been a success.  But not all guides to evaluation acknowledge the importance of unintended impacts – or give advice about methods to identify and include them.

This week, 52 Weeks of BetterEvaluation looks at why it is important to include unintended impacts and how this can be done.

Why is it important to include unintended impacts in an evaluation?

If a program (or policy or project) has significant negative unintended impacts then it should not be judged as a success. 

For example, Annie Kelly recently reported in The Guardian on the N2 highway in Bangladesh, which was intended to improve productivity and economic growth by connecting Dhaka to the growing city of Sylhet.  Apart from a short section of the highway, no safety features were built into the road, resulting in a significant increase in transport deaths and injuries:

“…road safety was not factored into the economic calculations of its loan package to upgrade the road. It was simply named as an "additional road-user benefit" alongside "improved riding comfort".”

In a recent evaluation of a job placement program, Bruno Crépon, Esther Duflo, Marc Gurgand, Roland Rathelot and Philippe Zamora found that, while the program had succeeded in placing participants in employment, this had come at the cost of displacing other applicants, resulting in no net improvement in employment. If they had looked only at the program's impact on intended beneficiaries, this important unintended impact would not have been identified and measured, and the overall assessment of the value of the program would have been wrong.

Conversely, if a project has significant positive unintended impacts it might not be a failure, even if it fails to achieve the intended outcomes. The history of science is filled with examples – such as the invention of Post-It notes, which came from a failed experiment to develop a super-strong glue.

Image credit: DangApricot (Erik Breedon)

Do evaluation guides recognize the importance of unintended impacts?

Some guides to evaluation acknowledge the importance of unintended outcomes and impacts. 

For example, the DAC criteria include unintended impacts in their definition of impact:

Impact: the positive and negative changes produced by a development intervention, directly or indirectly, intended or unintended. This involves the main impacts and effects resulting from the activity on the local social, economic, environmental and other development indicators. The examination should be concerned with both intended and unintended results and must also include the positive and negative impact of external factors, such as changes in terms of trade and financial conditions.

In the UK, Charities Evaluation Services provides advice on evaluation questions that clearly includes unintended impacts:

  • What has happened as a result of the programme or project?
  • What real difference has the activity made to the beneficiaries?
  • How many people have been affected?
  • How well are we meeting identified needs?
  • How well have we met our expected outcomes?
  • What were the unexpected outcomes?

But not all guides to evaluation recognize the importance of unintended impacts.  Some use a narrow definition of evaluation that is limited to intended impacts:

“Evaluation measures how well the program activities have met expected objectives.”

How to identify and include unintended outcomes and impacts

If unintended impacts are so important, how can they be identified and included?

In many cases potential negative impacts can be readily identified using one of the options described on the BetterEvaluation site:

  • Key informant interviews: asking experienced people to identify possible negative impacts, based on their experience with similar programs. Program critics can be especially useful.
  • Negative program theory: drawing a logic model that teases out the causal paths by which the program could produce negative impacts – either the reverse of intended impacts (decreased rather than increased student learning) or a different type of impact (for example, negative environmental impacts from a program seeking positive economic impacts).
  • Risk assessment: identifying potential negative impacts, their likelihood of occurring, and how they might be avoided.
  • Six Hats Thinking: asking someone to take on the “black hat” role and identify risks and potential problems.

But not all negative impacts (or unintended positive impacts) can be identified in advance.

Jonny Morell, in his book and workshop on Evaluation in the Face of Uncertainty, suggests that programs that are innovative, or that are being implemented for the first time in a different context, are more likely to produce impacts that are both unintended and unanticipated.

It is therefore important that some elements of your M&E plan can gather evidence about unanticipated impacts – for example, open-ended questions in questionnaires and interviews that ask what else has happened as a result of the program, and observation schedules that leave scope to record additional matters that were not planned for.


Author: Patricia Rogers, CEO, BetterEvaluation, Melbourne.
