The Situationist

Jonathan Haidt on the Situation of Moral Reasoning

Posted by The Situationist Staff on June 17, 2008

We recently published a post called the “Moral Psychology Primer,” which briefly highlighted the emerging work of several prominent moral psychologists, including Professor Jonathan Haidt from UVA. Haidt’s important work is relevant to law, morality, and positive psychology – all topics of interest to The Situationist. We thought it made sense, therefore, to follow up the primer with some choice excerpts from Jon Haidt’s terrific book, The Happiness Hypothesis. (We are grateful to Professor Haidt for his assistance in selecting some of these excerpts.)

* * *

I first rode a horse in 1991, in Great Smoky Mountains National Park, North Carolina. I’d been on rides as a child where some teenager led the horse by a short rope, but this was the first time it was just me and a horse, no rope. I wasn’t alone—there were eight other people on eight other horses, and one of the people was a park ranger—so the ride didn’t ask much of me. There was, however, one difficult moment. We were riding along a path on a steep hillside, two by two, and my horse was on the outside, walking about three feet from the edge. Then the path turned sharply to the left, and my horse was heading straight for the edge. I froze. I knew I had to steer left, but there was another horse to my left and I didn’t want to crash into it. I might have called out for help, or screamed, “Look out!”; but some part of me preferred the risk of going over the edge to the certainty of looking stupid. So I just froze. I did nothing at all during the critical five seconds in which my horse and the horse to my left calmly turned to the left by themselves.

As my panic subsided, I laughed at my ridiculous fear. The horse knew exactly what she was doing. She’d walked this path a hundred times, and she had no more interest in tumbling to her death than I had. She didn’t need me to tell her what to do, and, in fact, the few times I tried to tell her what to do she didn’t much seem to care. I had gotten it all so wrong because I had spent the previous ten years driving cars, not horses. Cars go over edges unless you tell them not to.

Human thinking depends on metaphor. We understand new or complex things in relation to things we already know. For example, it’s hard to think about life in general, but once you apply the metaphor “life is a journey,” the metaphor guides you to some conclusions: You should learn the terrain, pick a direction, find some good traveling companions, and enjoy the trip, because there may be nothing at the end of the road. It’s also hard to think about the mind, but once you pick a metaphor it will guide your thinking.

* * *

Modern theories about rational choice and information processing don’t adequately explain weakness of the will. The older metaphors about controlling animals work beautifully. The image that I came up with for myself, as I marveled at my weakness, was that I was a rider on the back of an elephant. I’m holding the reins in my hands, and by pulling one way or the other I can tell the elephant to turn, to stop, or to go. I can direct things, but only when the elephant doesn’t have desires of his own. When the elephant really wants to do something, I’m no match for him.

* * *

The point of these studies is that moral judgment is like aesthetic judgment. When you see a painting, you usually know instantly and automatically whether you like it. If someone asks you to explain your judgment, you confabulate. You don’t really know why you think something is beautiful, but your interpreter module (the rider) is skilled at making up reasons, as Gazzaniga found in his split-brain studies. You search for a plausible reason for liking the painting, and you latch on to the first reason that makes sense (maybe something vague about color, or light, or the reflection of the painter in the clown’s shiny nose). Moral arguments are much the same: Two people feel strongly about an issue, their feelings come first, and their reasons are invented on the fly, to throw at each other. When you refute a person’s argument, does she generally change her mind and agree with you? Of course not, because the argument you defeated was not the cause of her position; it was made up after the judgment was already made. If you listen closely to moral arguments, you can sometimes hear something surprising: that it is really the elephant holding the reins, guiding the rider. It is the elephant who decides what is good or bad, beautiful or ugly. Gut feelings, intuitions, and snap judgments happen constantly and automatically . . . , but only the rider can string sentences together and create arguments to give to other people. In moral arguments, the rider goes beyond being just an advisor to the elephant; he becomes a lawyer, fighting in the court of public opinion to persuade others of the elephant’s point of view.

* * *

In my studies of moral judgment, I have found that people are skilled at finding reasons to support their gut feelings: The rider acts like a lawyer whom the elephant has hired to represent it in the court of public opinion.

One of the reasons people are often contemptuous of lawyers is that they fight for a client’s interests, not for the truth. To be a good lawyer, it often helps to be a good liar. Although many lawyers won’t tell a direct lie, most will do what they can to hide inconvenient facts while weaving a plausible alternative story for the judge and jury, a story that they sometimes know is not true. Our inner lawyer works in the same way, but, somehow, we actually believe the stories he makes up. To understand his ways we must catch him in action; we must observe him carrying out low-pressure as well as high-pressure assignments.

* * *

Studies of everyday reasoning show that the elephant is not an inquisitive client. When people are given difficult questions to think about—for example, whether the minimum wage should be raised—they generally lean one way or the other right away, and then put a call in to reasoning to see whether support for that position is forthcoming. . . . Most people gave no real evidence for their positions, and most made no effort to look for evidence opposing their initial positions. David Perkins, a Harvard psychologist who has devoted his career to improving reasoning, has found the same thing. He says that thinking generally uses the “makes-sense” stopping rule. We take a position, look for evidence that supports it, and if we find some evidence—enough so that our position “makes sense”—we stop thinking. But at least in a low-pressure situation such as this, if someone else brings up reasons and evidence on the other side, people can be induced to change their minds; they just don’t make an effort to do such thinking for themselves.

* * *

Studies of “motivated reasoning” show that people who are motivated to reach a particular conclusion are even worse reasoners than those in Kuhn’s and Perkins’s studies, but the mechanism is basically the same: a one-sided search for supporting evidence only. . . . Over and over again, studies show that people set out on a cognitive mission to bring back reasons to support their preferred belief or action. And because we are usually successful in this mission, we end up with the illusion of objectivity. We really believe that our position is rationally and objectively justified.

Ben Franklin, as usual, was wise to our tricks. But he showed unusual insight in catching himself in the act. Though he had been a vegetarian on principle, he found his mouth watering on one long sea crossing when the men were grilling fish:

I balanc’d some time between principle and inclination, till I recollected that, when the fish were opened, I saw smaller fish taken out of their stomachs; then thought I, “If you eat one another, I don’t see why we mayn’t eat you.” So I din’d upon cod very heartily, and continued to eat with other people, returning only now and then occasionally to a vegetable diet.

Franklin concluded: “So convenient a thing it is to be a reasonable creature, since it enables one to find or make a reason for every thing one has a mind to do.”

* * *

For a sample of related Situationist posts, see “The Situation of Reason,” “I’m Objective, You’re Biased,” “Mistakes Were Made (but not by me),” and “Why We Punish.”

[Special thanks and welcome to Elizabeth Johnston, our newest Situationist Fellow, for drafting this post.]
