When: Monday 10/21/13 12-1pm
Where: WCC 1010
Professor Jon Hanson will kick off this year’s SALMS speaker series, discussing the significance of mind sciences for law.
Lunch will be provided.
Posted by The Situationist Staff on October 20, 2013
Posted by Fábio Almeida on October 15, 2013
Recent discoveries in evolutionary biology, ethology, neurology, cognitive psychology, and behavioral economics impel us to rethink the very foundations of law if we want to answer the many questions that remain unanswered in legal theory. Where does our ability to interpret rules and to think in terms of fairness toward others come from? Does the ability to reason about norms derive from certain aspects of our innate rationality and from mechanisms that were sculpted into our moral psychology by evolutionary processes?
Legal theory must take the complexity of the human mind into account
Any answer to these foundational issues requires us to take into consideration what these other sciences are discovering about how we behave. For instance, ethology has shown that many moral behaviors we usually think are unique to our species have been identified in other species as well.
Please watch this video, a TED talk by primatologist Frans de Waal:
The skills needed to feel empathy, to engage in mutual cooperation, to react to certain injustices, to form coalitions, to share, and to punish those who refuse to comply with expected behaviors – abilities once considered exclusive to humans – have been observed in many other animal species, especially those closer to our evolutionary lineage, such as the great apes. In the human case, these instinctive elements are also present. Even small children around one year of age show great capacity for moral cognition. They can identify patterns of distributive justice, even if they cannot explain how they came to a certain conclusion (after all, they cannot even speak at that age!).
In addition, several studies have shown that certain neural connections in our brains are actively involved in processing information related to capabilities typical of normative behavior. Think about the ability to empathize, for example. It is an essential skill that prevents us from seeing other people as things or as means. Empathy is needed to respect the Kantian categorical imperative to treat others as ends in themselves, and not as means to achieve other ends. This is something many psychopaths can’t do, because they face a severe reduction in their ability to empathize with others. fMRI studies have shown, year after year, that many diagnosed psychopaths have deficiencies in areas of the brain that have been associated with empathy.
If this sounds like science fiction, please consider the following cases.
A 40-year-old man, who had hitherto displayed absolutely normal sexual behavior, was kicked out by his wife after she discovered that he was visiting child pornography sites and had even tried to sexually molest children. He was arrested, and the judge determined that he would have to complete a rehabilitation program for sex addiction or face jail. But he was soon expelled from the program after inviting women there to have sex with him. Just before being arrested again for failing the program, he felt a severe headache and went to a hospital, where he underwent an MRI scan. The doctors identified a tumor in his orbitofrontal cortex, a brain region usually associated with moral judgment, impulse control, and the regulation of social behavior. After the removal of the tumor, his behavior returned to normal. Seven months later, he once more showed deviant behavior – and further tests revealed the reappearance of the tumor. After the removal of the new tumor, his sexual behavior again returned to normal.
You could also consider the case of Charles Whitman. Until he was 24, he had been a reasonably normal person. However, on August 1, 1966, he ascended to the top of the Tower at the University of Texas, where, armed to the teeth, he killed 13 people and wounded 32 before being killed by the police. It was later discovered that just before the mass killings, he had also murdered his wife and his mother. The previous day, he had left a typewritten letter in which one could read the following:
“I do not quite understand what it is that compels me to type this letter. Perhaps it is to leave some vague reason for the actions I have recently performed. I do not really understand myself these days. I am supposed to be an average reasonable and intelligent young man. However, lately (I cannot recall when it started) I have been a victim of many unusual and irrational thoughts.”
In the letter, he also requested that an autopsy be performed after his death in order to verify whether there was something wrong with his brain. Whitman’s brain was examined and … surprise! … the doctors found a glioblastoma tumor compressing the region of his amygdala, which is associated with the regulation of aggression and fear.
What does this mean for legal theory? At the very least, it means that law, so far, has been based on a false metaphysical conception: that the brain is a Lockean blank slate and that our actions derive from our rational dispositions. Criminal law theory assumes that an offender breaks the law exclusively out of free will and reasoning. Private law assumes that people sign contracts only after considering all their possible legal effects and are fully conscious of the reasons that motivated them to do so. Constitutional theory assumes that everyone is endowed with a rational disposition that enables the free exercise of civil and constitutional rights such as freedom of expression or freedom of religion. It is not in question that we are able to exercise such rights. But these examples show that the capacity to interpret norms and to act according to the law does not derive from a blank slate endowed with free will and rationality, but from a complex mind that evolved in our hominin lineage and that relies on brain structures that enable us to reason and choose among alternatives.
This means that our rationality is not perfect. It is affected not only by tumors but also by various cognitive biases. Since the 1970s, psychologists have studied these biases. Daniel Kahneman, for example, won the 2002 Nobel Prize in Economic Sciences for his research on the impact of these biases on decision-making. We can make truly irrational decisions because our mind relies on certain heuristics (fast-and-frugal rules) to evaluate situations. In most situations, these heuristics help us make the right decisions, but they can also lead us to make serious mistakes.
There are dozens of heuristics that structure our rationality. We are terrible at assessing the significance of statistical correlations, we discard unfavorable evidence, we tend to follow the most common behavior in our group (the herd effect), and we tend to see past events as if they had been easily predictable. We are inclined to cooperate with those who are part of our group (the parochialist bias), but not with those who belong to another group. And those are just some of the biases that have already been identified.
It is really hard to overcome these biases, because they are woven into what we call rationality; these flaws are an unavoidable part of it. Sure, with some effort, we can avoid many mistakes by using techniques that lead us to unbiased, correct answers. However, such artificial techniques can be expensive and demand considerable effort. We can use a computer and train our mathematical skills in order to overcome biases that cause errors in statistical evaluation, for instance. But how can we use a computer to reason about moral or legal issues while “getting around” these psychological biases? Probably, we can’t.
The best we can do is to reconsider the psychological assumptions of legal theory, taking into account what we actually know about our psychology and how it affects our judgment. And there is evidence that these biases really do influence how judges evaluate cases. For instance, a study by Birte Englich, Thomas Mussweiler, and Fritz Strack concluded that even legal experts are affected by cognitive biases. More specifically, they studied the effects of the anchoring bias on judicial activity by submitting 52 legal experts to the following experiment: the participants had to examine a hypothetical court case and determine the sentence in a fictitious shoplifting case. After reading the materials, they had to answer a questionnaire, at the end of which they would define the sentence.
Before answering the questions, however, the participants had to throw a pair of dice to determine the prosecutor’s sentencing demand. Half of the dice were loaded always to show the numbers 1 and 2, and the other half were loaded always to show 3 and 6; the sum of the numbers indicated the prosecutor’s demand. Afterwards, the participants answered questions about legal issues concerning the case, including the sentencing decision. The researchers found that the results of the dice had an actual impact on the proposed sentences: the average penalty imposed by participants whose dice showed the higher result (3 + 6 = 9) was 7.81 months in prison, while participants whose dice showed the lower result (1 + 2 = 3) proposed an average punishment of 5.28 months.
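The size of this anchoring effect can be made concrete with a quick calculation – a sketch using only the two averages reported above (the variable names are illustrative, not from the study):

```python
# Averages reported for the Englich, Mussweiler & Strack dice experiment.
high_anchor_avg = 7.81  # months, for dice showing 3 + 6 = 9
low_anchor_avg = 5.28   # months, for dice showing 1 + 2 = 3

gap = high_anchor_avg - low_anchor_avg
relative_increase = gap / low_anchor_avg

print(f"gap: {gap:.2f} months")                       # 2.53 months
print(f"relative increase: {relative_increase:.0%}")  # 48%
```

In other words, an anchor the participants knew to be random went along with sentences nearly fifty percent longer.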
In another study, it was found that, on average, tired and hungry judges end up taking the easy decision of denying parole rather than granting it. In the study, conducted in Israel, researchers divided the judges’ daily schedule into three sessions, at the beginning of each of which the participants could rest and eat. It turned out that, soon after eating and resting, the judges granted parole in about 65% of cases; by the end of each session, the rate fell to nearly zero. Admittedly, this is not really a cognitive bias but a physiological condition – still, it shows that a tired mind and the body’s energy needs can induce decisions that almost everyone would consider intrinsically unfair.
And so on. Study after study shows that (1) our ability to develop moral reasoning is innate, (2) our mind is filled with innate biases that are needed to process cultural information related to compliance with moral and legal norms, and (3) these biases affect our rationality.
This research raises many questions that legal scholars will have to face sooner or later. Would anyone say that due process of law is respected when judges anchor their decisions on completely external factors – factors of which they aren’t even aware? Of course, this was a controlled experiment, and nobody expects a judge to roll dice before judging a case. But might judges be influenced by other anchors as well, such as the numbers on a clock, a date on the calendar, or a number printed on a banknote? Or would anyone consider that due process was respected if parole was denied simply because the case was judged late in the morning? These external elements decisively influenced the judicial outcomes, yet none of them were mentioned in the decisions.
Legal theory needs to incorporate this knowledge into its structure. We need to build institutions capable of taking biases into account and, as far as possible, of circumventing them or at least diminishing their influence. For instance, knowing that judges tend to become impatient and harsher toward defendants when they are hungry and tired, a court could require them to take a 30-minute break after every 3 hours of work in order to restore their capacity to be as impartial as possible. This is just one small suggestion of how institutions could respond to these discoveries.
Of course, there are more complex cases, such as the discussion about criminals who had always displayed good behavior but had the misfortune of developing a brain tumor that contributed to the commission of a crime. Criminal theory is based on the thesis that the agent must intentionally engage in criminal conduct. But is it possible to talk about intention when a tumor was a direct cause of the result? And what if it were not a tumor but a brain malformation (as occurs in many cases of psychopathy)? Saying that criminal law can already solve these cases by finding that the offender bore no responsibility due to his condition would not solve the problem, because the issue lies in the very concept of intention that legal theory assumes.
And this problem extends into the rest of legal theory. We must take into account the role of cognitive biases in consumer relations. The law has not yet recognized the role of these biases in decision-making, but many companies are well aware of them. How many times have you bought a 750 ml soda for $2.00 just because it cost only $0.20 more than a 500 ml one? Possibly, you reasoned that you paid less per ml than you would have paid for the smaller size. But all you really wanted was 500 ml, and you ended up paying more for extra soda that you didn’t want! In other words, the company exploits a particular bias that affects most people in order to induce them to buy more of its products. Another example: for evolutionary reasons, humans are prone to consume fatty foods and lots of sugar. Companies exploit this fact to their advantage, which ends up generating part of the obesity crisis that we see in the world today. In their defense, companies say that consumers purchased the product of their own accord. What they do not say, but neuroscience and evolutionary theory do, is that our “free will” has a long evolutionary history that propels us to consume exactly the kinds of food that, over the years, damage our health. And law needs to take these facts into consideration if it wants to adequately protect and enforce consumer rights.
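The unit-price arithmetic behind the soda example can be spelled out – a sketch using the prices given above, where the $1.80 price of the smaller bottle is implied by the “$0.20 more” figure:

```python
# Prices from the example above: the large bottle costs $0.20 more.
small_price, small_ml = 1.80, 500
large_price, large_ml = 2.00, 750

# Price per liter, the figure the "bargain" intuition keys on.
small_per_l = small_price / small_ml * 1000
large_per_l = large_price / large_ml * 1000

print(f"small bottle: ${small_per_l:.2f} per liter")  # $3.60 per liter
print(f"large bottle: ${large_per_l:.2f} per liter")  # $2.67 per liter
print(f"extra spent on unwanted soda: ${large_price - small_price:.2f}")  # $0.20
```

The larger bottle really is cheaper per liter – which is exactly what makes the offer persuasive even though the total outlay, and the amount of unwanted soda, both go up.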
Law is still based on an “agency model” very similar to game theory’s assumption of rationality. But we are not perfectly rational. Every decision we make is influenced by the way our mind operates. Can we really think it is fair to blame someone who committed a crime on the basis of erroneous results generated by a cognitive bias? And, on the other hand, would it be right to exonerate a defendant on those grounds? To answer these and other difficult questions, legal scholars must rethink the concept of the person assumed by law, taking into account our intrinsic biological nature.
Related Situationist posts:
Image from Flickr
Posted by The Situationist Staff on October 13, 2013
Fábio Portela L. Almeida is a 2003 graduate of the Universidade de Brasília Law School in Brazil. After graduating, he worked as a lawyer, and since 2006 he has worked as a clerk at the Brazilian Superior Labour Court. He also earned a Master of Laws degree in 2007 at the same university, where he wrote a dissertation on constitutional issues arising from religious teaching in Brazilian public schools, which was published as a book in 2008.
In 2011, he earned an M.Phil. degree at the Universidade de Brasília Department of Philosophy. His dissertation, “The evolution of a normative mind: origins of human cooperation,” was awarded the ANPOF Prize for best philosophical dissertation of the 2010/2011 biennium. Currently, Fábio is an SJD candidate at the Universidade de Brasília Law School and a Visiting Researcher at Harvard Law School. His research interests concern the interdisciplinary relationships among legal theory, biology, psychology, moral philosophy, economics, sociology, and anthropology.
In his free time, Fábio enjoys writing about stock investing on his personal blog, listening to classical music, reading, traveling, and watching movies. Fábio is a long-time reader of The Situationist, and we are delighted that he is visiting HLS for the year and contributing to the blog as a fellow. Look for his first post soon.
Posted by The Situationist Staff on October 6, 2013
Situationist Contributor Emily Pronin’s recent article, “When the mind races: Effects of thought speed on feeling and action,” Current Directions in Psychological Science, 22, 283–288, was highlighted in a recent APS Observer column. Here is an excerpt containing a helpful overview of Pronin’s fascinating research and findings.
You wake up. Your phone blinks. You touch the screen, slide your finger, and chills shiver down your spine. “See me tomorrow,” says the email your boss sent at midnight. Your thoughts accelerate. “What does she want? Why did she write so late? Am I in trouble? The company is in trouble. This down economy! I’m getting fired. Why me? Where will I work? I have skills. There are other companies. I have no skills. Where will I apply? Can we move? What will my parents think? How will the kids react to changing schools? I can do this. We can do this. No matter what.”
We think. It helps us. Errands, plans, and goals require thought. Synapses fire. Action potentials race down axons. Chemicals bathe our brains with neurotransmitters. Thoughts guide action, from ordering a coffee to avoiding predators. What we think matters. But according to Emily Pronin of Princeton University, how fast we think matters, too.
Making people think fast boosts their happiness, energy, riskiness, and self-confidence. In an impressive program of research, Pronin and colleagues have documented these effects using many ways to speed up thinking. In one study, participants read trivia statements at fast or slow speeds (Chandler & Pronin, 2012). Next, they completed a risk-taking task. Participants could earn money — but only if they didn’t take too many risks. Fast-thinking participants took the most risks and earned the least money. On the bright side, having people read at twice their normal reading speed increased their positive emotion (Pronin & Wegner, 2006).
Pronin (2013) argues that fast thinking prepares people to take immediate action. Feeling good nudges that process along, as does increased energy. If you spy a moose while running on a trail, it will behoove you to take swift and confident action even if it involves some risk. You may even experience an “a-ha” moment that provides a creative solution you would not have considered if you were thinking at a normal or slow pace (Yang & Pronin, 2012).
Read the entire column here.
Image from Flickr.
Other Situationist posts about Emily Pronin’s work:
Posted by The Situationist Staff on October 3, 2013
A few excerpts from an outstanding 1992 New York Times book review by Walter Reich of Christopher Browning’s remarkable book, “Ordinary Men: Reserve Police Battalion 101 and the Final Solution in Poland”:
We know a lot about how the Germans carried out the Holocaust. We know much less about how they felt and what they thought as they did it, how they were affected by what they did, and what made it possible for them to do it. In fact, we know remarkably little about the ordinary Germans who made the Holocaust happen — not the desk murderers in Berlin, not the Eichmanns and Heydrichs, and not Hitler and Himmler, but the tens of thousands of conscripted soldiers and policemen from all walks of life, many of them middle-aged, who rounded up millions of Jews and methodically shot them, one by one, in forests, ravines and ditches, or stuffed them, one by one, into cattle cars and guarded those cars on their way to the gas chambers.
In his finely focused and stunningly powerful book, “Ordinary Men: Reserve Police Battalion 101 and the Final Solution in Poland,” Christopher R. Browning tells us about such Germans and helps us understand, better than we did before, not only what they did to make the Holocaust happen but also how they were transformed psychologically from the ordinary men of his title into active participants in the most monstrous crime in human history. In doing so he aims a penetrating searchlight on the human capacity for utmost evil and leaves us staring at his subject matter with the shock of knowledge and the lurking fear of self-recognition.
* * *
In the end, what disturbs the reader more than the policemen’s escape from punishment is their capacity — as the ordinary men they were, as men not much different from those we know or even from ourselves — to kill as they did.
Battalion 101’s killing wasn’t, as Mr. Browning points out, the kind of “battlefield frenzy” occasionally seen in all wars, when soldiers, having faced death, and having seen their friends killed, slaughter enemy prisoners or even civilians. It was, rather, the cold-blooded fulfillment of German national policy, and involved, for the policemen, a process of accommodation to orders that required them to do things they would never have dreamed they would ever do, and to justify their actions, or somehow reinterpret them, so that they would not see themselves as evil people.
Mr. Browning’s meticulous account, and his own acute reflections on the actions of the battalion members, demonstrate the important effect that the situation had on those men: the orders to kill, the pressure to conform, and the fear that if they didn’t kill they might suffer some kind of punishment or, at least, damage to their careers. In fact, the few who tried to avoid killing got away with it; but most believed, or at least could tell themselves, that they had little choice.
But Mr. Browning’s account also illustrates other factors that made it possible for the battalion’s ordinary men not only to kill but, ultimately, to kill in a routine, and in some cases sadistic, way. Each of these factors helped the policemen feel that they were not violating, or violating only because it was necessary, their personal moral codes.
One such factor was the justification for killing provided by the anti-Semitic rationales to which the policemen had been exposed since the rise of Nazism, rationales reinforced by the battalion’s officers. The Jews were presented not only as evil and dangerous but also, in some way, as responsible for the bombing deaths of German women and children. Another factor was the process of dehumanization: abetted by Nazi racial theories that were embraced by policemen who preferred not to see themselves as killers, Jews were seen as less than people, as creatures who could be killed without the qualms that would be provoked in them were they to kill fellow Germans or even Slavs. It was particularly when the German policemen came across German Jews speaking their own language, especially those from their own city, that they felt a human connection that made it harder to kill them.
The policemen were also helped by the practice of trying not to refer to their activities as killing: they were involved in “actions” and “resettlements.” Moreover, the responsibility wasn’t theirs; it belonged to the authorities — Major Trapp as well as, ultimately, the leaders of the German state — whose orders they were merely carrying out. Indeed, whatever responsibility they did have was diffused by dividing the task into parts and by sharing it with other people and processes. It was shared, first of all, by others in the battalion, some of whom provided cordons so that Jews couldn’t escape and some of whom did the shooting. It was shared by the Trawnikis, who were brought in to do the shooting whenever possible so that the battalion could focus on the roundups. And it was shared, most effectively, by the death camps, which made the men’s jobs immensely easier, since stuffing a Jew into a cattle car, though it sealed his fate almost as surely as a neck shot, left the actual killing to a machine-like process that would take place far away, one for which the battalion members didn’t need to feel personally responsible.
CLEARLY, ordinary human beings are capable of following orders of the most terrible kinds. What stands between civilization and genocide is the respect for the rights and lives of all human beings that societies must struggle to protect. Nazi Germany provided the context, ideological as well as psychological, that allowed the policemen’s actions to happen. Only political systems that recognize the worst possibilities in human nature, but that fashion societies that reward the best, can guard the lives and dignity of all their citizens.
* * *
Related Situationist posts:
Posted by The Situationist Staff on September 14, 2013
People who get away with cheating when they believe no one is hurt by their dishonesty are more likely to feel upbeat than remorseful afterward, according to new research published by the American Psychological Association.
Although people predict they will feel bad after cheating or being dishonest, many of them don’t, reports a study published online in APA’s Journal of Personality and Social Psychology.
“When people do something wrong specifically to harm someone else, such as apply an electrical shock, the consistent reaction in previous research has been that they feel bad about their behavior,” said the study’s lead author, Nicole E. Ruedy, of the University of Washington. “Our study reveals people actually may experience a ‘cheater’s high’ after doing something unethical that doesn’t directly harm someone else.”
Even when there was no tangible reward, people who cheated felt better on average than those who didn’t cheat, according to results of several experiments that involved more than 1,000 people in the U.S. and England. A little more than half the study participants were men, with 400 from the general public in their late 20s or early 30s and the rest in their 20s at universities.
Participants predicted that they or someone else who cheated on a test or logged more hours than they had worked to get a bonus would feel bad or ambivalent afterward. When participants actually cheated, they generally got a significant emotional boost instead, according to responses to questionnaires that gauged their feelings before and after several experiments.
In one experiment, participants who cheated on math and logic problems were overall happier afterward than those who didn’t and those who had no opportunity to cheat. The participants took tests on computers in two groups. In one group, when participants completed an answer, they were automatically moved to the next question. In the other group, participants could click a button on the screen to see the correct answer, but they were told to disregard the button and solve the problem on their own. Graders could see who used the correct-answer button and found that 68 percent of the participants in that group did, which the researchers counted as cheating.
People who gained from another person’s misdeeds felt better on average than those who didn’t, another experiment found. Researchers at a London university observed two groups in which each participant solved math puzzles while in a room with another person who was pretending to be a participant. The actual participants were told they would be paid for each puzzle they solved within a time limit and that the other “participant” would grade the test when the time was up. In one group, the actor inflated the participant’s score when reporting it to the experimenter. In the other group, the actor scored the participant accurately. None of the participants in the group with the cheating actor reported the lie, the authors said.
In another trial, researchers asked the participants not to cheat because it would make their responses unreliable, yet those who cheated were more likely to feel more satisfied afterward than those who didn’t. Moreover, the cheaters who were reminded at the end of the test how important it was not to cheat reported feeling even better on average than other cheaters who were not given this message, the authors said. Researchers gave participants a list of anagrams to unscramble and emphasized that they should unscramble them in consecutive order and not move on to the next word until the previous anagram was solved. The third jumble on the list was “unaagt,” which can spell only the word taguan, a species of flying squirrel. Previous testing has shown that the likelihood of someone solving this anagram is minuscule. The graders considered anyone who went beyond the third word to have cheated and found that more than half the participants did, the authors said.
“The good feeling some people get when they cheat may be one reason people are unethical even when the payoff is small,” Ruedy said. “It’s important that we understand how our moral behavior influences our emotions. Future research should examine whether this ‘cheater’s high’ could motivate people to repeat the unethical behavior.”
Article: “The Cheater’s High: The Unexpected Affective Benefits of Unethical Behavior,” Nicole E. Ruedy, PhD, University of Washington; Celia Moore, PhD, London Business School; Francesca Gino, PhD, Harvard University; and Maurice E. Schweitzer, PhD, University of Pennsylvania; Journal of Personality and Social Psychology, online, Sept. 3, 2013.
Related Situationist posts: