Jeffrey Rosen has a fine article in today’s New York Times Magazine summarizing some of the recent cognitive neuroscience research — and the complex puzzles that research creates for law and legal theory.
For those with the time, we recommend the entire piece — for the rest of you, we have posted some of the most intriguing excerpts.
* * *
“To a neuroscientist, you are your brain; nothing causes your behavior other than the operations of your brain,” [Joshua] Greene says. “If that’s right, it radically changes the way we think about the law. The official line in the law is all that matters is whether you’re rational, but you can have someone who is totally rational but whose strings are being pulled by something beyond his control.” In other words, even someone who has the illusion of making a free and rational choice between soup and salad may be deluding himself, since the choice of salad over soup is ultimately predestined by forces hard-wired in his brain. Greene insists that this insight means that the criminal-justice system should abandon the idea of retribution — the idea that bad people should be punished because they have freely chosen to act immorally — which has been the focus of American criminal law since the 1970s, when rehabilitation went out of fashion. Instead, Greene says, the law should focus on deterring future harms. In some cases, he supposes, this might mean lighter punishments. “If it’s really true that we don’t get any prevention bang from our punishment buck when we punish that person, then it’s not worth punishing that person,” he says. (On the other hand, Carter Snead, the Notre Dame scholar, maintains that capital defendants who are not considered fully blameworthy under current rules could be executed more readily under a system that focused on preventing future harms.)
“You can have a horrendously damaged brain where someone knows the difference between right and wrong but nonetheless can’t control their behavior,” says Robert Sapolsky, a neurobiologist at Stanford. “At that point, you’re dealing with a broken machine, and concepts like punishment and evil and sin become utterly irrelevant. Does that mean the person should be dumped back on the street? Absolutely not. You have a car with the brakes not working, and it shouldn’t be allowed to be near anyone it can hurt.”
Even as these debates continue, some skeptics contend that both the hopes and fears attached to neurolaw are overblown. “There’s nothing new about the neuroscience ideas of responsibility; it’s just another material, causal explanation of human behavior,” says Stephen J. Morse, professor of law and psychiatry at the University of Pennsylvania. “How is this different than the Chicago school of sociology,” which tried to explain human behavior in terms of environment and social structures? “How is it different from genetic explanations or psychological explanations? The only thing different about neuroscience is that we have prettier pictures and it appears more scientific.”
Morse insists that “brains do not commit crimes; people commit crimes” — a conclusion he suggests has been ignored by advocates who, “infected and inflamed by stunning advances in our understanding of the brain . . . all too often make moral and legal claims that the new neuroscience . . . cannot sustain.” He calls this “brain overclaim syndrome” and cites as an example the neuroscience briefs filed in the Supreme Court case Roper v. Simmons to question the juvenile death penalty. “What did the neuroscience add?” he asks. If adolescent brains caused all adolescent behavior, “we would expect the rates of homicide to be the same for 16- and 17-year-olds everywhere in the world — their brains are alike — but in fact, the homicide rates of Danish and Finnish youths are very different than American youths.” Morse agrees that our brains bring about our behavior — “I’m a thoroughgoing materialist, who believes that all mental and behavioral activity is the causal product of physical events in the brain” — but he disagrees that the law should excuse certain kinds of criminal conduct as a result. “It’s a total non sequitur,” he says. “So what if there’s biological causation? Causation can’t be an excuse for someone who believes that responsibility is possible. Since all behavior is caused, this would mean all behavior has to be excused.”
Still, Morse concedes that there are circumstances under which new discoveries from neuroscience could challenge the legal system at its core. “Suppose neuroscience could reveal that reason actually plays no role in determining human behavior,” he suggests tantalizingly. “Suppose I could show you that your intentions and your reasons for your actions are post hoc rationalizations that somehow your brain generates to explain to you what your brain has already done” without your conscious participation. If neuroscience could reveal us to be automatons in this respect, Morse is prepared to agree with Greene and Cohen that criminal law would have to abandon its current ideas about responsibility and seek other ways of protecting society.
Some scientists are already pushing in this direction. In a series of famous experiments in the 1970s and ’80s, Benjamin Libet measured people’s brain activity while telling them to move their fingers whenever they felt like it. Libet detected brain activity suggesting a readiness to move the finger half a second before the actual movement and about 400 milliseconds before people became aware of their conscious intention to move their finger. Libet argued that this leaves 100 milliseconds for the conscious self to veto the brain’s unconscious decision, or to give way to it — suggesting, in the words of the neuroscientist Vilayanur S. Ramachandran, that we have not free will but “free won’t.”
Morse is not convinced that the Libet experiments reveal us to be helpless automatons. But he does think that the study of our decision-making powers could bear some fruit for the law. “I’m interested,” he says, “in people who suffer from drug addictions, psychopaths and people who have intermittent explosive disorder — that’s people who have no general rationality problem other than they just go off.” In other words, Morse wants to identify the neural triggers that make people go postal. “Suppose we could show that the higher deliberative centers in the brain seem to be disabled in these cases,” he says. “If these are people who cannot control episodes of gross irrationality, we’ve learned something that might be relevant to the legal ascription of responsibility.” That doesn’t mean they would be let off the hook, he emphasizes: “You could give people a prison sentence and an opportunity to get fixed.”
Another set of experiments, conducted by Elizabeth Phelps, who teaches psychology at New York University, combines brain scans with a behavioral test known as the Implicit Association Test, or I.A.T., as well as physiological tests of the startle reflex. The I.A.T. flashes pictures of black and white faces at you and asks you to associate various adjectives with the faces. Repeated tests have shown that white subjects take longer to respond when they’re asked to associate black faces with positive adjectives and white faces with negative adjectives than vice versa, and this is said to be an implicit measure of unconscious racism. Phelps and her colleagues added neurological evidence to this insight by scanning the brains and testing the startle reflexes of white undergraduates at Yale before they took the I.A.T. She found that the subjects who showed the most unconscious bias on the I.A.T. also had the highest activation in their amygdalas — a center of threat perception — when unfamiliar black faces were flashed at them in the scanner. By contrast, when subjects were shown pictures of familiar black and white figures — like Denzel Washington, Martin Luther King Jr. and Conan O’Brien — there was no jump in amygdala activity.
The legal implications of the new experiments involving bias and neuroscience are hotly disputed. Mahzarin R. Banaji, a psychology professor at Harvard [and Situationist Contributor] who helped to pioneer the I.A.T., has argued that there may be a big gap between the concept of intentional bias embedded in law and the reality of unconscious racism revealed by science. When the gap is “substantial,” she and the U.C.L.A. law professor [and Situationist Contributor] Jerry Kang have argued, “the law should be changed to comport with science” — relaxing, for example, the current focus on intentional discrimination and trying to root out unconscious bias in the workplace with “structural interventions,” which critics say may be tantamount to racial quotas. One legal scholar has cited Phelps’s work to argue for the elimination of peremptory challenges to prospective jurors — if most whites are unconsciously racist, the argument goes, then any decision to strike a black juror must be infected with racism. Much to her displeasure, Phelps’s work has been cited by a journalist to suggest that a white cop who accidentally shot a black teenager on a Brooklyn rooftop in 2004 must have been responding to a hard-wired fear of unfamiliar black faces — a version of “the amygdala made me do it.”
Phelps herself says it’s “crazy” to link her work to cops who shoot on the job and insists that it is too early to use her research in the courtroom. “Part of my discomfort is that we haven’t linked what we see in the amygdala or any other region of the brain with an activity outside the magnet that we would call racism,” she told me. “We have no evidence whatsoever that activity in the brain is more predictive of things we care about in the courtroom than the behaviors themselves that we correlate with brain function.” In other words, just because you have a biased reaction to a photograph doesn’t mean you’ll act on those biases in the workplace. Phelps is also concerned that jurors might be unduly influenced by attention-grabbing pictures of brain scans. “Frank Keil, a psychologist at Yale, has done research suggesting that when you have a picture of a mechanism, you have a tendency to overestimate how much you understand the mechanism,” she told me. Defense lawyers confirm this phenomenon. “Here was this nice color image we could enlarge, that the medical expert could point to,” Christopher Plourd, a San Diego criminal defense lawyer, told The Los Angeles Times in the early 1990s. “It documented that this guy had a rotten spot in his brain. The jury glommed onto that.”
Other scholars are even sharper critics of efforts to use scientific experiments about unconscious bias to transform the law. “I regard that as an extraordinary claim that you could screen potential jurors or judges for bias; it’s mind-boggling,” I was told by Philip Tetlock, professor at the Haas School of Business at the University of California at Berkeley. Tetlock has argued that split-second associations between images of African-Americans and negative adjectives may reflect “simple awareness of the social reality” that “some groups are more disadvantaged than others.” He has also written that, according to psychologists, “there is virtually no published research showing a systematic link between racist attitudes, overt or subconscious, and real-world discrimination.” (A few studies show, Tetlock acknowledges, that openly biased white people sometimes sit closer to whites than blacks in experiments that simulate job hiring and promotion.) “A light bulb going off in your brain means nothing unless it’s correlated with a particular output, and the brain-scan stuff, heaven help us, we have barely linked that with anything,” agrees Tetlock’s co-author, Amy Wax of the University of Pennsylvania Law School. “The claim that homeless people light up your amygdala more and your frontal cortex less and we can infer that you will systematically dehumanize homeless people — that’s piffle.”
* * *
As the new technologies proliferate, even the neurolaw experts themselves have only begun to think about the questions that lie ahead. Can the police get a search warrant for someone’s brain? Should the Fourth Amendment protect our minds in the same way that it protects our houses? Can courts order tests of suspects’ memories to determine whether they are gang members or police informers, or would this violate the Fifth Amendment’s ban on compulsory self-incrimination? Would punishing people for their thoughts rather than for their actions violate the Eighth Amendment’s ban on cruel and unusual punishment? However astonishing our machines may become, they cannot tell us how to answer these perplexing questions. We must instead look to our own powers of reasoning and intuition, relatively primitive as they may be. As Stephen Morse puts it, neuroscience itself can never identify the mysterious point at which people should be excused from responsibility for their actions because they are not able, in some sense, to control themselves. That question, he suggests, is “moral and ultimately legal,” and it must be answered not in laboratories but in courtrooms and legislatures. In other words, we must answer it ourselves.