Week 26: Weighing people’s values in evaluation

30th June 2014, by Laura Rodriguez

What is more important to you: a good education or a good healthcare system? Or perhaps employment or security is at the forefront of your mind at the moment. What about the environment or human rights? We all have different priorities in life and different sets of values with which we make judgements on things around us. Evaluations attempting to understand effects on people’s lives should at least try to understand the values of those people rather than imposing an external set of values. This week’s guest blog is from Laura Rodriguez Takeuchi, a researcher at the Overseas Development Institute. She introduces some practical ways that evaluators can begin to weigh people’s values as they relate to desired outcomes and the distribution of benefits.

One of the very first steps in conducting an evaluation is to determine what ‘success’ looks like. BetterEvaluation suggests that there could be inherent success criteria (for example, explicit programme objectives or sector-wide standards) or success criteria that emerge from tacit values, and these often have to be negotiated among different groups of people.

The process of defining and agreeing the values which will be used to form the judgements is often one of the more mysterious and less systematic parts of evaluation. Moreover, the question of whose values matter is sometimes overlooked.

Many programme and policy evaluations attempt to understand the effects of interventions on people’s lives without first understanding what wellbeing and satisfaction really mean to those people. Wellbeing is not a straightforward concept to grasp, let alone measure. It comprises multiple aspects, and although there is a growing consensus on the set of things that are important in people’s lives, there is less agreement on how to capture the relative importance of each one. A new ODI paper explores some of the alternatives for incorporating people’s values in an index of wellbeing, which could provide a useful tool for budget allocation and evaluation of interventions across different sectors.

In the absence of specific information about the values attached to different aspects of wellbeing, a commonly used approach is to set equal weights for the various dimensions. This is the approach taken in the Human Development Index and the Multidimensional Poverty Index, which assume that all domains - health, education, income/living standards - contribute equally to a person’s overall wellbeing. This can’t simply be assumed: whether it holds for different people is a matter for empirical testing (and some have found that it does not).
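To make the stakes of the weighting choice concrete, here is a minimal sketch in Python. All numbers are invented: the normalised domain scores and the ‘elicited’ weights are purely hypothetical, and the function is a generic weighted average, not the actual HDI or MPI aggregation formula. With these particular numbers, the ranking of the two hypothetical people flips between the two weighting schemes.

```python
# Hypothetical normalised scores (0 to 1) on three wellbeing domains
# for two hypothetical people.
profiles = {
    "person_a": {"health": 0.9, "education": 0.4, "income": 0.5},
    "person_b": {"health": 0.5, "education": 0.8, "income": 0.6},
}

def index(scores, weights):
    """Composite wellbeing index: weighted average of domain scores.

    The weights must sum to 1.
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[d] * s for d, s in scores.items())

# Equal weights, as in the HDI-style default.
equal = {"health": 1 / 3, "education": 1 / 3, "income": 1 / 3}
# Hypothetical weights elicited from respondents who value health most.
elicited = {"health": 0.5, "education": 0.2, "income": 0.3}

for name, scores in profiles.items():
    print(name,
          round(index(scores, equal), 3),
          round(index(scores, elicited), 3))
# With these numbers, equal weights rank person_b higher,
# while the elicited weights rank person_a higher.
```

The point of the sketch is only that the choice of weights is consequential: whenever domains trade off against each other, different weights can reverse who an index identifies as worse off.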

What if we asked people instead and developed weights from their responses? This doesn’t just mean having a chat - it can and should be done in a rigorous, methodical way. There are a number of approaches for eliciting values; in the paper we review six of them that are currently used in the UK health sector but have the potential to be applied in the context of broader development interventions (see table 1 and appendix 1 in the paper for more on these):

  • Standard gamble
  • Time trade-off
  • Person trade-off
  • Discrete choice experiments
  • Rating scales
  • Swing weights

The general idea is to ask people about hypothetical case scenarios, and to derive from those responses the weights that reflect how people value different domains of wellbeing. For example, in a discrete choice experiment (DCE), respondents are asked to select which of two hypothetical persons with different poverty profiles they would consider most worthy of public support (figure below). When a large number of these responses is gathered, for a combination of different hypothetical persons, it is possible to determine which domains of wellbeing people place a higher value on.
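As a sketch of the underlying logic only: real DCE data are analysed with statistical models such as conditional logit, but a toy counting version can show how weights emerge from choices. Here the responses are invented; each task pits two hypothetical persons who each lack a different wellbeing domain, and the share of tasks in which respondents choose to support the person lacking a given domain serves as a crude weight for that domain.

```python
from collections import Counter

# Each task: which hypothetical person is more deserving of public support?
# The two persons differ only in which single wellbeing domain they lack.
# Choosing the person who lacks a domain signals that losing that domain
# is judged most serious, i.e. the domain is valued highly.
# Each tuple: (domain lacked by A, domain lacked by B, chosen person).
responses = [
    ("health", "education", "A"),
    ("health", "education", "A"),
    ("health", "income", "A"),
    ("education", "income", "B"),  # income deprivation judged worse here
    ("health", "education", "B"),
]

votes = Counter()
for lack_a, lack_b, choice in responses:
    votes[lack_a if choice == "A" else lack_b] += 1

total = sum(votes.values())
weights = {domain: n / total for domain, n in votes.items()}
print(weights)  # health receives the largest weight in this toy sample
```

In practice the hypothetical persons vary across several domains at once, which is why a regression-style model is needed to disentangle the contribution of each domain to the choice; the counting version above works only because each toy task isolates a single pairwise trade-off.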

Answering these questions is not easy (try selecting a person from the example above), and tools need to be carefully developed, tested and adapted to different needs and contexts. The fact that the health sector already uses them quite widely should encourage us to think that the methods can be expanded and applied to other areas of development.

A more important issue is to recognise that different people may hold different opinions, so arriving at a single set of values may often not be possible; perhaps the aim should be to determine a plausible range of values. It is nevertheless important to recognise the values of different groups of people - for example, donors and communities affected by a development intervention, or specific groups of beneficiaries of a programme (women, children, those living in a certain area, etc.) - to get a sense of how interventions may affect their wellbeing in different ways. Group-based methodologies for eliciting weights can also be used, but they have their own set of difficulties, especially when power dynamics are involved.

If development is about making people’s lives better, we surely need to ask them first what they value most. There is a wide range of methodologies available to do this in a rigorous way, and careful thought needs to be given to selecting a method that is feasible for the context in which it is to be applied. Other methodologies, which are not based on people’s responses, are still useful and can be incorporated into other steps of the evaluation process, but the value question in evaluation cannot be ignored.

Image: Chestnut hawker, by Canadian Pacific/Flickr



David Week

Hi Laura. It's heartening to see the increased emphasis in development on the importance of the beneficiary's values in defining the success or failure of development interventions. This is another step on a long road we have walked since the birth of the development industry post-WW2.

I understand from the paper, and your title, that you describe weighting based on "people's perception of values." You also make mention of tacit values, and eliciting those values.

I have a concern with the idea that any kind of questionnaire or artificial scenario is a valid way of ascertaining tacit values. All you will get is people's perceptions of values: not their real values.

Tacit values are a subset of tacit knowledge. The origin of the idea goes back to Polanyi, and the classic example is the bicycle. Now, here's a question: When you ride a bicycle, which way do you turn the handlebars in order to turn right? The common answer is "to the right". The correct answer is "to the left". It's physically impossible to turn right by turning the handlebars to the right: you'll fall over. What we in fact do is turn left, which causes the bike to start to fall to the right, and then turn right to complete the turn.

The point here is that—almost by definition—there is a gap between what people think they know (or value) and what they actually know (or value), as shown by what they do. In market research, for instance, people's values are often measured by looking at how disposable income is spent. Not perfect, but a step towards assessing value according to real-world behaviour, rather than questions: according to action, not words.

Tacit knowledge is embodied knowledge. It is not there "in your head" ready for access by a researcher. What's "in your head", and comes out via questioning, is what you think you value, or what you think you should value. Actual, lived values are a completely different matter.


Laura Rodriguez Takeuchi

Hi David,

Thanks for your comment. Being aware of the difficulties in the methods presented is important, and some of the most salient ones are also reviewed in the paper. There are also other methods, used in environmental economics and market research (as you point out), which are not included in the paper. They have other limitations: people's 'real' choices may not represent their ideal preferences or values, but rather are made in a constrained context.

That is not to say that different methods are not useful in different parts of an evaluation. For instance, when trying to narrow down indicators to measure the different domains of wellbeing, statistical techniques (which have nothing to do with asking people) could be helpful.

I think asking people has a value of its own, especially in the context of wellbeing. The key message is that, regardless of the method, it is important to recognise the different values around an evaluation. The criteria for success, in my view, have to incorporate the values of those who are affected by a programme or policy intervention.

Laura Rodriguez Takeuchi

Hi Rick,

That is exactly the point I was trying to make before. People do not always act according to the rationality principles assumed in economic models, so we cannot assume that their choices (for example, their budget allocations) are reflective of their values. Even economists recognise this now, which is why, I argue, it is worth exploring the methods presented in the paper.

On your second point, I think weighted checklists are very similar to rating scales in their design. They are useful indeed, and your example of the Basic Necessities Surveys in Vietnam is an interesting one - thank you for sharing it. The only difficulty, perhaps, is that the trade-offs we are looking for are less explicit in such tasks, since each component is assessed individually. They are useful in group exercises, when discussions among the group are part of the exercise and help to assess the relative importance of each domain.

David Week

Thank you both for helping me crystallise an unease that I felt about the language of economists, but couldn't pinpoint. I now see it like this:

- when a biologist builds a model of human behaviour, and the model doesn't match the behaviour, she says the model is wrong.

- when an economist builds a model of human behaviour, and the model doesn't match, she says the behaviour is wrong.


David Week

Hi Steve

I don't believe in souls.

My point is this: asking people what they think is an extraordinarily shallow approach to understanding human beings. Here is the values statement of Enron: respect, integrity, communication, excellence. And here is a formal instruction from the CEO at the time, Ken Lay: http://www.agsm.edu.au/bobm/teaching/BE/Cases_pdf/enron-code.pdf
They weren't kidding around, or snickering. This was their discourse at the time.

This is a documentary about Enron's real values, as exhibited in their behaviour: https://www.youtube.com/watch?v=gxzLX_C9Z74

This is why psychologists and sociologists go to extraordinary lengths to design experiments which avoid people's self-assessment and self-reporting. Examples that come to mind include Harvard's Implicit Association Test, Kahneman's behavioural economics, the raft of experimentation on the reliability of eyewitness testimony, Elizabeth Loftus' experiments on memory, most of Zimbardo's or Asch's experiments… and here I'm only talking about the famous ones. But I don't read many cases of social or psychological research these days in which the subjects are asked directly about the topic of the research, or even know what the research is about.

This is because:

- people's self-reporting is biased to make themselves look good

- what they honestly think they do is not what they actually do

- what they honestly think they saw is not what happened

- what they honestly remember happening often never happened.

The reason I am concerned here is not just because I'm some kind of methods junkie, but because this research will be used in the service of a development model in which data is extracted from a population, processed by social scientists in the pay of donors, used to construct programs by donor experts, and then rolled out to those populations, without the explicit understanding and consent of the populations involved.

If you want to mobilise human values in governance, here's what I recommend: don't study the population. Involve them in direct, informed and meaningful ways in program selection and decision-making. Then not only will you see "values in action", but you will also have an ethical situation in which the people who decide what happens to them are the same people who are affected by what happens to them.

David Week

I agree with Rick.

First, the language of rational and irrational is not of the form "we have this standard model, which is obviously an oversimplification, but in some cases it is quite good at predicting economic phenomena". If that were the case you would talk not about the behaviour of human beings, but about the accuracy of the model.

Second, rational / irrational is heavily loaded language, with the connotation in normal use that rational = good and irrational = bad. I think the whole use of such language is unhelpful, and economics seems to be the last of the social sciences to go through the process of understanding that reality takes precedence over the model, and that if there is a deviation, it is the model that is defective.

An old economics joke has an economist objecting: "Yes, yes, that's all very well in practice: but how does it work in THEORY?"

Finally, in development we work with people from many different societies, in which what is "rational" varies. In Aceh, Islam is considered rational and irreligion irrational; in Melbourne, not necessarily so. Even within our own culture, what is considered rational is never wholly agreed upon, and shifts over time. A classic case is the supposed cognitive "bias" against losing in otherwise "rational" 50/50 bets, which has since been revised as very rational once you consider that losing your food supply means death, whereas doubling it just means a lot of wasted food. In development in particular, therefore, it is inappropriate to start categorising human behaviour as either rational or irrational.
