The Situationist

Posts Tagged ‘moral psychology’

Interview with Professor Joshua Greene

Posted by The Situationist Staff on September 26, 2010

From The Project on Law & Mind Sciences at Harvard Law School (PLMS):

Here is an outstanding interview of Joshua Greene by Harvard Law student Jeff Pote. The interview, titled “On Moral Judgment and Normative Questions,” lasts just over 58 minutes. It was conducted as part of the Law and Mind Science Seminar at Harvard.

Bio:

Joshua D. Greene is an Assistant Professor of Psychology at Harvard University. He received his A.B. from Harvard University in 1997, where he was advised by Derek Parfit. He received his Ph.D. in Philosophy from Princeton University in 2002, having written a dissertation on the foundations of ethics advised by David Lewis and Gilbert Harman. From 2002 until 2006, when he began at Harvard, he was a postdoctoral fellow at Princeton in the Neuroscience of Cognitive Control Laboratory under Jonathan Cohen. He is currently the Director of the Moral Cognition Lab.

* * *

Table of contents:

  • 00:00 — Title Frame
  • 00:23 — Introduction
  • 00:54 — How did your professional interests develop?
  • 04:58 — What are the questions that interest you?
  • 06:07 — What research projects are you currently working on?
  • 08:32 — Could you describe the original experiment that supported a dual-process view of moral judgment?
  • 13:13 — Has further research supported the dual-process view of moral judgment?
  • 16:43 — Could you explain how this, or any, psychological understanding could bear on normative questions of law and policy?
  • 24:39 — Could you provide an example of a situation where we should not rely on “blunt intuition?”
  • 30:42 — Can you see other places where psychological research illuminates normative questions of law or policy?
  • 37:40 — Do any of our moral judgments represent an objective moral reality (or moral facts)?
  • 44:38 — Could you provide an example of a “moral objectivist” solution that you find unpersuasive?
  • 49:33 — What is the problem of “free will” and what is its relevance for legal responsibility and punishment?
  • 56:26 — How will this emerging scientific understanding of the human animal affect law and moral philosophy?

Duration: 58:04

* * *

For a sample of related Situationist posts, see “Joshua Greene To Speak at Harvard Law School,” “2010 Law and Mind Sciences Conference,” “The Interior Situation of Honesty (and Dishonesty),” “Moral Psychology Primer,” “Law & the Brain,” “Pinker on the Situation of Morality,” “The Science of Morality,” and “Your Brain and Morality.”

Posted in Experimental Philosophy, Morality, Neuroscience, Video | 2 Comments »

Fiery Cushman at Harvard Law School

Posted by The Situationist Staff on September 20, 2009

Tomorrow (Monday, September 21), the Student Association for Law and Mind Sciences (SALMS) at Harvard Law School is hosting a talk, titled “Outcome vs. Intent: Which Do We Punish and Why?,” by Professor Fiery Cushman. The abstract for the talk is as follows:

Sometimes people cause harm accidentally; other times they attempt to cause harm, but fail. How do ordinary people treat cases where intentions and outcomes are mismatched? Dr. Cushman will present a series of studies suggesting that while people’s judgments of moral wrongness depend overwhelmingly on an assessment of intent, their judgments of deserved punishment exhibit substantial reliance on accidental outcomes as well. This pattern of behavior is present at an early age and consistent across both survey-based and behavioral economic paradigms. These findings raise a question about the function of our moral psychology: why do we judge moral wrongness and deserved punishment by different standards? Dr. Cushman will present evidence that punishment is sensitive to accidental outcomes in part because it is designed to teach social partners not to engage in harmful behaviors and because teaching on the basis of outcomes is more effective than teaching on the basis of intentions.

* * *

The event will take place in Hauser 104 at Harvard Law School, from 12:00 to 1:00 p.m. For more information, e-mail salms@law.harvard.edu.

For a sample of related Situationist posts, see “Attributing Blame — from the Baseball Diamond to the War on Terror,” “John Darley on ‘Justice as Intuitions’ – Video,” “The Situation of Punishment in Schools,” “Why We Punish,” “Kevin Jon Heller on The Cognitive Psychology of Mens Rea,” “Mark Lanier visits Professor Jon Hanson’s Tort Class (web cast),” and “Situationist Torts – Abstract.”

Posted in Abstracts, Law, Legal Theory, Morality, Philosophy | 1 Comment »

Will Wilkinson Interviews Jonathan Haidt

Posted by The Situationist Staff on July 20, 2008

Below is a ten-minute BloggingHeads clip from a one-hour interview of social psychologist Jonathan Haidt.

To watch the entire video, click here. For a sample of related Situationist posts, see “The Motivated Situation of Morality,” “Jonathan Haidt on the Situation of Moral Reasoning,” and “Moral Psychology Primer.”

Posted in Ideology, Morality, Video | 2 Comments »

Smart People Thinking about People Thinking about People Thinking

Posted by The Situationist Staff on July 7, 2008

Anne Trafton in MIT’s news office has a great summary of the fascinating research (and background) of MIT’s Rebecca Saxe.

* * *

How do we know what other people are thinking? How do we judge them, and what happens in our brains when we do?

MIT neuroscientist Rebecca Saxe is tackling those tough questions and many others. Her goal is no less than understanding how the brain gives rise to the abilities that make us uniquely human–making moral judgments, developing belief systems and understanding language.

It’s a huge task, but “different chunks of it can be bitten off in different ways,” she says.

Saxe, who joined MIT’s faculty in 2006 as an assistant professor of brain and cognitive sciences, specializes in social cognition–how people interpret other people’s thoughts. It’s a difficult subject to get at, since people’s thoughts and beliefs can’t be observed directly.

“These are extremely abstract kinds of concepts, although we use them fluently and constantly to get around in the world,” says Saxe.

While it’s impossible to observe thoughts directly, it is possible to measure which brain regions are active while people are thinking about certain things. Saxe probes the brain circuits underlying human thought with a technique called functional magnetic resonance imaging (fMRI), a type of brain scan that measures blood flow.

Using fMRI, she has identified an area of the brain (the temporoparietal junction) that lights up when people think about other people’s thoughts, something we do often as we try to figure out why others behave as they do.

That finding is “one of the most astonishing discoveries in the field of human cognitive neuroscience,” says Nancy Kanwisher, the Ellen Swallow Richards Professor of Brain and Cognitive Sciences at MIT and Saxe’s PhD thesis adviser.

“We already knew that some parts of the brain are involved in specific aspects of perception and motor control, but many doubted that an abstract high-level cognitive process like understanding another person’s thoughts would be conducted in its own private patch of cortex,” Kanwisher says.

* * *

Because fMRI reveals brain activity indirectly, by monitoring blood flow rather than the firing of neurons, it is considered a fairly rough tool for studying cognition. However, it still offers an invaluable approach for neuroscientists, Saxe says.

More precise techniques, such as recording activity from single neurons, can’t be used in humans because they are too invasive. fMRI gives a general snapshot of brain activity, offering insight into what parts of the brain are involved in complex cognitive activities.

Saxe’s recent studies use fMRI to delve into moral judgment–specifically, what happens in the brain when people judge whether others are behaving morally. Subjects in her studies make decisions regarding classic morality scenarios such as whether it’s OK to flip a switch that would divert a runaway train onto a track where it would kill one person instead of five people.

Judging others’ behavior in such situations turns out to be a complex process that depends on more than just the outcome of an event, says Saxe.

“Two events with the exact same outcome get extremely different reactions based on our inferences of someone’s mental state and what they were thinking,” she says.

For example, judgments often depend on whether the judging person is in conflict with the person performing the action. When a soldier sets off a bomb, an observer’s perception of whether the soldier intended to kill civilians depends on whether the soldier and observer are on the same side of the conflict.

* * *

Saxe earned her PhD from MIT in 2003, and recently her first graduate student, Liane Young, successfully defended her PhD thesis. That extends a direct line of female brain and cognitive scientists at MIT that started with Molly Potter, professor of psychology, who advised Kanwisher.

“It is thrilling to see this line of four generations of female scientists,” Kanwisher says.

Saxe, a native of Toronto, says she wanted to be a scientist from a young age, inspired by two older cousins who were biochemists.

At first, “I wanted to be a geneticist because I thought it was so cool that you could make life out of chemicals. You start with molecules and you make a person. I thought that was mind-blowing,” she says.

She was eventually drawn to neuroscience because she wanted to explore big questions, such as how the brain gives rise to the mind.

She says that approach places her right where she wants to be in the continuum of scientific study, which ranges from tiny systems such as a cell-signaling pathway, to entire human societies. At each level, there is a tradeoff between the size of the questions you can ask and the concreteness of answers you can get, Saxe says.

“I’m doing this because I want to pursue these more-abstract questions, maybe at the cost of never finding out the answers,” she says.

Posted in Education, Morality, Neuroscience | 2 Comments »

Moral Psychology Primer

Posted by The Situationist Staff on May 27, 2008

Dan Jones has a terrific article in the April issue of Prospect, titled “The Emerging Moral Psychology.” We’ve included some excerpts from the article below.

* * *

Long thought to be a topic of enquiry within the humanities, the nature of human morality is increasingly being scrutinised by the natural sciences. This shift is now beginning to provide impressive intellectual returns on investment. Philosophers, psychologists, neuroscientists, economists, primatologists and anthropologists, all borrowing liberally from each other’s insights, are putting together a novel picture of morality—a trend that University of Virginia psychologist Jonathan Haidt has described as the “new synthesis in moral psychology.” The picture emerging shows the moral sense to be the product of biologically evolved and culturally sensitive brain systems that together make up the human “moral faculty.”

A pillar of the new synthesis is a renewed appreciation of the powerful role played by intuitions in producing our ethical judgements. Our moral intuitions, argue Haidt and other psychologists, derive not from our powers of reasoning, but from an evolved and innate suite of “affective” systems that generate “hot” flashes of feelings when we are confronted with a putative moral violation.

This intuitionist perspective marks a sharp break from traditional “rationalist” approaches in moral psychology, which gained a large following in the second half of the 20th century under the stewardship of the late Harvard psychologist Lawrence Kohlberg. In the Kohlbergian tradition, moral verdicts derive from the application of conscious reasoning, and moral development throughout our lives reflects our improved ability to articulate sound reasons for the verdicts . . . .

But experimental studies give cause to question the primacy of rationality in morality. In one experiment, Jonathan Haidt presented people with a range of peculiar stories, each of which depicted behaviour that was harmless (in that no sentient being was hurt) but which also felt “bad” or “wrong.” One involved a son who promised his mother, while she was on her deathbed, that he would visit her grave every week, and then reneged on his commitment because he was busy. Another scenario told of a man buying a dead chicken at the supermarket and then having sex with it before cooking and eating it. These weird but essentially harmless acts were, nonetheless, by and large deemed to be immoral.

Further evidence that emotions are in the driving seat of morality surfaces when people are probed on why they take their particular moral positions. In a separate study which asked subjects for their ethical views on consensual incest, most people intuitively felt that incestuous sex is wrong, but when asked why, many gave up, saying, “I just know it’s wrong!”—a phenomenon Haidt calls “moral dumbfounding.”

It’s hard to argue that people are rationally working their way to moral judgements when they can’t come up with any compelling reasons—or sometimes any reasons at all—for their moral verdicts. Haidt suggests that the judgements are based on intuitive, emotional responses, and that conscious reasoning comes into its own in creating post hoc justifications for our moral stances. Our powers of reason, in this view, operate more like a lawyer hired to defend a client than a disinterested scientist searching for the truth.

Our rational and rhetorical skill is also recruited from time to time as a lobbyist. Haidt points out that the reasons—whether good or bad—that we offer for our moral views often function to press the emotional buttons of those we wish to bring around to our way of thinking. So even when explicit reasons appear to have the effect of changing people’s moral opinions, the effect may have less to do with the logic of the arguments than their power to elicit the right emotional responses. We may win hearts without necessarily converting minds. . . .

Even if you recognise the tendency to base moral judgements on how moral violations make you feel, you probably would also like to think that you have some capacity to think through moral issues, to weigh up alternative outcomes and make a call on what is right and wrong.

Thankfully, neuroscience gives some cause for optimism. Philosopher-cum-cognitive scientist Joshua Greene of Harvard University and his colleagues have used functional magnetic resonance imaging to map the brain as it churns over moral problems, inspired by a classic pair of dilemmas from the annals of moral philosophy called the Trolley Problem and the Footbridge Problem. [For a review of Greene’s research, click here.]

* * *

What is going on in the brain when people mull over these different scenarios? Thinking through cases like the Trolley Problem—what Greene calls an impersonal moral dilemma as it involves no direct violence against another person—increases activity in brain regions located in the prefrontal cortex that are associated with deliberative reasoning and cognitive control (so-called executive functions). This pattern of activity suggests that impersonal moral dilemmas such as the Trolley Problem are treated as straightforward rational problems: how to maximise the number of lives saved. By contrast, brain imaging of the Footbridge Problem—a personal dilemma that invokes up-close and personal violence—tells a rather different story. Along with the brain regions activated in the Trolley Problem, areas known to process negative emotional responses also crank up their activity. In these more difficult dilemmas, people take much longer to make a decision and their brains show patterns of activity indicating increased emotional and cognitive conflict within the brain as the two appalling options are weighed up.

Greene interprets these different activation patterns, and the relative difficulty of making a choice in the Footbridge Problem, as the sign of conflict within the brain. On the one hand is a negative emotional response elicited by the prospect of pushing a man to his death saying “Don’t do it!”; on the other, cognitive elements saying “Save as many people as possible and push the man!” For most people thinking about the Footbridge Problem, emotion wins out; in a minority, the utilitarian conclusion of maximising the number of lives saved prevails.
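
To make the dual-process picture concrete, here is a toy Python sketch (emphatically not Greene’s model; the Dilemma fields, the signal magnitudes, and the emotion_gain parameter are all invented assumptions) in which an affective signal and a utilitarian signal compete and the stronger one settles the verdict:

```python
from dataclasses import dataclass

@dataclass
class Dilemma:
    """Minimal features of a moral dilemma (illustrative only)."""
    personal: bool    # up-close, personal violence (Footbridge) vs. impersonal (Trolley)
    lives_saved: int  # utilitarian benefit of acting

def decide(d: Dilemma, emotion_gain: float = 1.0) -> str:
    """Toy dual-process decision: whichever signal is stronger wins.

    The affective signal vetoes personal violence; the "cognitive" signal
    favours maximising lives saved. All magnitudes are arbitrary assumptions.
    """
    affective_signal = (3.0 if d.personal else 0.5) * emotion_gain  # "Don't do it!"
    cognitive_signal = 0.5 * d.lives_saved                          # "Save as many as possible!"
    return "refuse to act" if affective_signal > cognitive_signal else "act (utilitarian choice)"

print(decide(Dilemma(personal=False, lives_saved=5)))                   # Trolley: most people act
print(decide(Dilemma(personal=True, lives_saved=5)))                    # Footbridge: most people refuse
print(decide(Dilemma(personal=True, lives_saved=5), emotion_gain=0.2))  # weaker affect: the utilitarian minority
```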

* * *

While there is a growing consensus that the moral intuitions revealed by moral dilemmas such as the Trolley and Footbridge problems draw on unconscious psychological processes, there is an emerging debate about how best to characterise these unconscious elements.

On the one hand is the dual-processing view, in which “hot” affectively-laden intuitions that militate against personal violence are sometimes pitted against the ethical conclusions of deliberative, rational systems. An alternative perspective that is gaining increased attention sees our moral intuitions as driven by “cooler,” non-affective general “principles” that are innately built into the human moral faculty and that we unconsciously follow when assessing social behaviour.

In order to find out whether such principles drive moral judgements, scientists need to know how people actually judge a range of moral dilemmas. In recent years, Marc Hauser, a biologist and psychologist at Harvard, has been heading up the Moral Sense Test (MST) project to gather just this sort of data from around the globe and across cultures.

The project is casting its net as wide as possible: the MST can be taken by anyone with access to the internet. Visitors to the “online lab” are presented with a series of short moral scenarios—subtle variations of the original Footbridge and Trolley dilemmas, as well as a variety of other moral dilemmas. The scenarios are designed to explore whether, and how, specific factors influence moral judgements. Data from 5,000 MST participants showed that people appear to follow a moral code prescribed by three principles:

• The action principle: harm caused by action is morally worse than equivalent harm caused by omission.

• The intention principle: harm intended as the means to a goal is morally worse than equivalent harm foreseen as the side-effect of a goal.

• The contact principle: using physical contact to cause harm to a victim is morally worse than causing equivalent harm to a victim without using physical contact.

Crucially, the researchers also asked participants to justify their decisions. Most people appealed to the action and contact principles; only a small minority explicitly referred to the intention principle. Hauser and colleagues interpret this as evidence that some principles that guide our moral judgments are simply not available to, and certainly not the product of, conscious reasoning. These principles, it is proposed, are an innate and universal part of the human moral faculty, guiding us in ways we are unaware of. In a (less elegant) reformulation of Pascal’s famous claim that “The heart has reasons that reason does not know,” we might say “The moral faculty has principles that reason does not know.”
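
To see how such principles could operate as a simple comparative rule set, here is a hypothetical Python sketch (not part of the Moral Sense Test or of Hauser’s analysis; the feature names and the equal weighting of the three principles are illustrative assumptions) that judges which of two matched scenarios is morally worse:

```python
def badness(action: bool, means: bool, contact: bool) -> int:
    """Count how many aggravating principles a harmful scenario triggers:
    harm by action (vs. omission), harm intended as a means (vs. side effect),
    and harm caused through physical contact."""
    return int(action) + int(means) + int(contact)

def morally_worse(scenario_a: dict, scenario_b: dict) -> bool:
    """True if scenario_a should be judged worse than scenario_b under the three principles."""
    return badness(**scenario_a) > badness(**scenario_b)

# Hypothetical scenario encodings (labels invented for illustration).
push_off_footbridge = {"action": True, "means": True, "contact": True}
divert_the_trolley = {"action": True, "means": False, "contact": False}
fail_to_warn = {"action": False, "means": False, "contact": False}

print(morally_worse(push_off_footbridge, divert_the_trolley))  # True: means and contact aggravate
print(morally_worse(divert_the_trolley, fail_to_warn))         # True: action worse than omission
```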

The notion that our judgements of moral situations are driven by principles of which we are not cognisant will no doubt strike many as implausible. Proponents of the “innate principles” perspective, however, can draw succour from the influential Chomskyan idea that humans are equipped with an innate and universal grammar for language as part of their basic design spec. In everyday conversation, we effortlessly decode a stream of noise into meaningful sentences according to rules that most of us are unaware of, and use these same rules to produce meaningful phrases of our own. Any adult with normal linguistic competence can rapidly decide whether an utterance or sentence is grammatically valid or not without conscious recourse to the specific rules that determine grammaticality. Just as we intuitively know what we can and cannot say, so too might we have an intuitive appreciation of what is morally permissible and what is forbidden.

Marc Hauser and legal theorist John Mikhail of Georgetown University have started to develop detailed models of what such an “innate moral grammar” might look like. Such models usually posit a number of key components, or psychological systems. One system uses “conversion rules” to break down observed (or imagined) behaviour into a meaningful set of actions, which is then used to create a “structural description” of the events. This structural description captures not only the causal and temporal sequence of events (what happened and when), but also intentional aspects of action (was the outcome intended as a means or a side effect? What was the intention behind the action?).

With the structural description in place, the causal and intentional aspects of events can be compared with a database of unconscious rules, such as “harm intended as a means to an end is morally worse than equivalent harm foreseen as the side-effect of a goal.” If the events involve harm caused as a means to the greater good (and particularly if caused by the action and direct contact of another person), then a judgement of impermissibility is more likely to be generated by the moral faculty. In the most radical models of the moral grammar, judgements of permissibility and impermissibility occur prior to any emotional response. Rather than driving moral judgements, emotions in this view arise as a by-product of unconsciously reached judgements as to what is morally right and wrong.
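
Read as an information-processing pipeline, the moral-grammar proposal can be caricatured in a few lines of code. The sketch below only illustrates the architecture described above (conversion rules, a structural description, a database of unconscious rules, and a judgement produced before any emotional response); none of the class names, rules, or fields is taken from Hauser’s or Mikhail’s actual models:

```python
from dataclasses import dataclass

@dataclass
class StructuralDescription:
    """Causal and intentional parse of an observed action (illustrative fields)."""
    harm_by_action: bool   # harm produced by action rather than omission
    harm_as_means: bool    # harm used as a means to the good outcome, not a side effect
    direct_contact: bool   # harm delivered through direct personal contact
    net_lives_saved: int   # overall welfare consequence of acting

def convert(event: dict) -> StructuralDescription:
    """Stand-in for the 'conversion rules': reduce a raw event to a structural description."""
    return StructuralDescription(
        harm_by_action=event.get("action", False),
        harm_as_means=event.get("means", False),
        direct_contact=event.get("contact", False),
        net_lives_saved=event.get("net_saved", 0),
    )

def permissible(desc: StructuralDescription) -> bool:
    """Toy database of unconscious rules applied to the structural description.

    Rule (invented): harm intended as a means, especially through direct contact,
    is impermissible even when acting would save more lives overall.
    """
    if desc.harm_as_means and (desc.direct_contact or desc.net_lives_saved <= 0):
        return False
    return desc.net_lives_saved > 0

def judge(event: dict) -> str:
    """In the 'radical' versions of the model, the judgement comes first
    and the emotional response is a by-product of it."""
    verdict = "permissible" if permissible(convert(event)) else "impermissible"
    emotion = "aversion" if verdict == "impermissible" else "none"
    return f"{verdict} (emotional by-product: {emotion})"

print(judge({"action": True, "means": False, "contact": False, "net_saved": 4}))  # trolley-style: permissible
print(judge({"action": True, "means": True, "contact": True, "net_saved": 4}))    # footbridge-style: impermissible
```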

Hauser argues that a similar “principles and parameters” model of moral judgement could help make sense of universal themes in human morality as well as differences across cultures (see below). There is little evidence about how innate principles are affected by culture, but Hauser has some expectations as to what might be found. If the intention principle is really an innate part of the moral faculty, then its operation should be seen in all cultures. However, cultures might vary in how much harm as a means to a goal they typically tolerate, which in turn could reflect how extensively that culture sanctions means-based harm such as infanticide (deliberately killing one child so that others may flourish, for example). These intriguing though speculative ideas await a thorough empirical test.

* * *

Although current studies have only begun to scratch the surface, the take-home message is clear: intuitions that function below the radar of consciousness are most often the wellsprings of our moral judgements. . . .

Despite the knocking it has received, reason is clearly not entirely impotent in the moral domain. We can reflect on our moral positions and, with a bit of effort, potentially revise them. An understanding of our moral intuitions, and the unconscious forces that fuel them, gives us perhaps the greatest hope of overcoming them.

* * *

To read the entire article, click here. To read some related Situationist posts, see “Quick Introduction to Experimental (Situationist?) Philosophy” and “Pinker on the Situation of Morality.”

Posted in Ideology, Morality, Neuroscience, Philosophy | 5 Comments »

 