The 4th edition of Qualitative Research and Evaluation Methods by Michael Quinn Patton will be published in mid-November 2014. A new feature is one personal “rumination” in each chapter. In these ruminations, Patton reflects on issues that he explains have “persistently engaged, sometimes annoyed, occasionally haunted, and often amused me over more than 40 years of qualitative research and evaluation practice. In these ruminations I state my case on the issue and make my peace with it.” Because these issues have global relevance, Patton has agreed to post his ruminations on Better Evaluation to stimulate further reflection among evaluators generally.
Researchers and evaluators are admonished to stay rational and independent. Don’t get emotional. Feelings are the enemy of rationality and objectivity. Emotions and feelings lead to caring—and caring is a primary source of bias. Stay distant and unfeeling. Caring emerges from connecting to people, an empathic sense of interdependence rather than independence. So avoid connection and caring, eschew empathy, maintain rationality and independence, and you can avoid bias, the greatest of scientific failings.
I hear this view expounded regularly when qualitative findings are attacked for being biased because the researcher or evaluator got close to the people studied and took on the responsibility of communicating their point of view. Early in my career, I was admonished in a public forum by a distinguished university professor who disagreed with the qualitative findings on an innovative education program:
Your results can’t be trusted because you went native. You obviously spent lots of time with them. You totally bought into what those people told you. You’ve lost all objectivity. You call it empathy. True scientists call it bias.
At that time, I had no ready response. Today, I do, and I will share it at the end of this rumination.
As I’ve encountered versions of this confusion between empathy and bias over the years, I get the sense that the vociferousness of the attack (and it is often quite vehement) stems from a deep-seated fear of emotions and human connection among those dismissing qualitative data.
So What About the Role of Emotions in Scientific Inquiry?
Brian Knutson is a professor of psychology and neuroscience at Stanford University. Knutson (2014) makes the case that scientific inquiry should incorporate emotions as a source of data and insight into the nature of the human experience.
The absence of emotion pervades modern scientific models of the mind. In the most popular mental metaphors of social science, mind as reflex (from behaviorism) explicitly omits emotion and mind as computer (from cognitivism) all but ignores it. Even when emotion appears in later theories, it is usually as an afterthought—an epiphenomenal reaction to some event that has already passed. But over the past decade, the rising field of affective science has revealed that emotions can precede and motivate thought and behavior.
Emerging physiological, behavioral, and neuroimaging evidence suggests that emotions are proactive as well as reactive. Emotional signals from the brain now yield predictions about choice and mental health symptoms, and may soon guide scientists to specific circuits that confer more precise control over thought and behavior. Thus, the price of continuing to ignore emotion’s centrality to mental function could be substantial. By assuming the mind is like a bundle of reflexes, a computer program, or even a self-interested rational actor, we may miss out on significant opportunities to predict and control behavior—both in individuals and groups.
Literally and figuratively, we should stop relegating emotion to the periphery, and move emotion to the center—where it belongs.
An Anecdote About Empathy and Bias
MQP Rumination #1 concerned undervaluing anecdotes as a form of potentially useful data. So let me share an anecdote that illustrates the importance of human empathy as a source of understanding and making sense of the world. It is an anecdote about the nature of bias told by Tom Griffiths, Professor of Psychology and Cognitive Science, University of California, Berkeley, and Director of the Institute of Cognitive and Brain Sciences.
It’s easy to discover the biases that have been built into speech recognition software. I once left my office for a meeting, locking the door behind me, and came back to find a stranger had broken in and typed a series of poetic sentences into my computer. Who was this person, and what did the message mean? After a few spooky, puzzling minutes, I realized that I had left my speech recognition software running, and the sentences were the guesses it had produced about what the rustling of the trees outside my window meant. But the fact that they were fairly intelligible English sentences reflected the biases of the software, which didn’t even consider the possibility that it was listening to the wind rather than a person. (Griffiths, 2014)
Cultivating Empathic Skills and Appreciating the Appropriate Use of Bias
Computers, at least so far, lack the capacity for empathy—or even bias. Biased human beings import bias into software. The distinguished philosopher of science and evaluation research pioneer Michael Scriven concluded his volume on Hard-Won Lessons in Program Evaluation (1993) with astute observations about both empathy and bias. First, empathy:
The most difficult problems with program evaluation are not methodological or political but psychological. . . . What is lacking is the ability to see the point of view of those on the receiving end of the evaluation [intended beneficiaries of the program]—the lack of empathic skills [italics added]—and that is just as important a failing. (p. 87)
Scriven (1993) also commented on the common fallacy of defining bias as a lack of belief in or concern about something:
Preference and commitment do not entail bias.
It is crucial to begin with a clear idea of the difference between bias in the sense of prejudice, which means a tendency to error, and bias in the . . . sense of preference, support, endorsement, acceptance, or favoring of one side of an issue. Only the first of these senses is derogatory, and in the legal context the term bias is restricted to the first sense. From none of the synonyms for the second sense can one infer prejudice, because the preference, support, and so on may be justified. It is insulting, and never tolerated in a court of law where these matters are of the essence, to treat someone who has preferences as if they are thereby biased (and hence not a fair witness). It is especially absurd in the science, mathematics, engineering, and technology (SMET) area to act as if belief in [something] shows bias. Bias must be shown, either by demonstrating a pattern of error or by demonstrating the presence of an attitude that definitely and regularly produces error. . . .
People with knowledge about an area are typically people with views about it; the way to avoid panels of ignoramuses or compulsive fence sitters is to go for a balance of views, not an absence of views. (pp. 79–80)
Emotion and Reason
When I was in graduate school, we were constantly warned that emotion was the enemy of reason. Now, based on the latest research in decision making, brain science, and cognitive science, we know that emotion is not opposed to reason; our emotions assign value to things and are the basis of reason (Brooks, 2011; Patton, 2014). “Emotive traits” like “empathetic sensitivity” are not barriers to scientific inquiry about the human experience; rather, the capacity for empathy enhances, enriches, and deepens human understanding (Brooks, 2011, 2014).
Nowadays, in the face of attacks on my qualitative findings as biased because I got close enough to people to feel empathetic, I assume the stance of an old man feigning calm and a statesman-like attitude, rather than displaying the passion and defensiveness of youth, and I say,
I’m sorry you feel that way. Oops! I didn’t mean to use the verb feel. But it must be a terrible thing to be so afraid of feelings and human connections. How much of human experience you miss by staying so doggedly and dogmatically in your head. But I certainly understand why you can’t relate to and don’t understand my findings. You detect bias. I detect empathic atrophy. Such a tragic loss. My condolences.
References

Brooks, D. (2011). The social animal: The hidden sources of love, character, and achievement. New York, NY: Random House.

Brooks, D. (2014, May 6). The streamlined life. The New York Times, p. A25.

Patton, M. Q. (2014). What brain sciences reveal about integrating theory and practice. American Journal of Evaluation, 35(2), 237–244.

Scriven, M. (1993). Hard-won lessons in program evaluation. New Directions for Program Evaluation, No. 58. San Francisco, CA: Jossey-Bass.