Tomorrow (2/16), Daniel Gilbert, Situationist friend, Professor of Psychology at Harvard University, author of Stumbling on Happiness, and host of the PBS television series This Emotional Life, returns to Harvard Law to deliver a talk entitled
“How To Do Precisely the Right Thing At All Possible Times.”
Most experts tell us what to decide but they don’t tell us how. So the moment we face a novel decision—should I move to Cleveland or Anchorage? Marry Jennifer or Joanne? Become an architect or a pastry chef?—we’re lost. Is it possible to do the right thing at all possible times? In fact, there is a simple method for making decisions that most people find easy to understand but impossible to follow. New research in psychology, neuroscience, and behavioral economics explains why.
Having a simple, easy-to-pronounce name is more likely to win you friends and favour in the workplace, a study by Dr Simon Laham at the University of Melbourne and Dr Adam Alter at New York University Stern School of Business has found.
In the first study of its kind, published in the Journal of Experimental Social Psychology, researchers analysed how the pronunciation of names can influence impression formation and decision-making. In particular, they demonstrated “the name pronunciation effect,” which occurs when people with easy-to-pronounce names are evaluated more positively than those with difficult-to-pronounce names.
The study revealed that:
People with more pronounceable names were more likely to be favoured for political office and job promotions
Political candidates with easy-to-pronounce names were more likely to win a race than those with difficult-to-pronounce names, based on a mock ballot study
Attorneys with more pronounceable names rose more quickly to superior positions in their firm hierarchies, based on a field study of 500 first and last names of US lawyers
Lead author Dr Simon Laham said subtle biases that we are not aware of affect our decisions and choices. “Research findings revealed that the effect is not due merely to the length of a name or how foreign-sounding or unusual it is, but rather how easy it is to pronounce,” he said.
Dr Adam Alter, who conducted the law firm analysis, said this effect probably also exists in other industries and in many everyday contexts. “People simply aren’t aware of the subtle impact that names can have on their judgments,” Dr Alter said.
Dr Laham said the results had important implications for the management of bias and discrimination in our society.
“It’s important to appreciate the subtle biases that shape our choices and judgments about others. Such an appreciation may help us de-bias our thinking, leading to fairer, more objective treatment of others,” he said.
Researchers conducted studies both in lab settings and in a natural environment using a range of names from Anglo, Asian, and Western and Eastern European backgrounds.
This research builds on Dr Alter’s earlier work, which suggests that financial stocks with simpler names tend to outperform similar stocks with complex names immediately after they appear on the market.
This award is for career research accomplishment or distinguished career contributions in personality psychology and honors an individual who has demonstrated “analytic sophistication, theoretical depth, and wide scholarship.”
Sponsored by SPSP
This award is for career research accomplishment or distinguished career contributions in social psychology and honors an individual who “has contributed and is continuing to contribute to the field of social psychology in significant ways.”
Sponsored by SPSP
New in 2011, this award honors scholars who have made “major theoretical and/or empirical contributions to social psychology and/or personality psychology or to bridging these areas.” Recipients are recognized for distinguished scholarly contributions across productive careers.
Sponsored by SPSP
“Shared social responsibility: A field experiment in pay-what-you-want pricing and charitable giving.” Published in Science in 2010.
This award recognizes a publication “that best explicates social psychological phenomena principally through the use of field research methods and settings and that thereby demonstrates the relevance of the discipline to communities outside of academic social psychology.”
Endowed by FPSP
This award recognizes a mid-career scholar “whose work substantially adds to the body of knowledge” in personality psychology and/or brings together personality psychology and social psychology.
Endowed by FPSP
The 2011 Carol and Ed Diener Award in Social Psychology
This award recognizes a mid-career scholar “whose work substantially adds to the body of knowledge” in social psychology and/or brings together personality psychology and social psychology.
Endowed by FPSP
This award honors a person, normally outside the SPSP community, who has “a sustained and distinguished record for disseminating knowledge in personality or social psychology to the general public through popular media.”
Sponsored by SPSP
The 2011 Media Prize
[Situationist Co-Founders] Jon Hanson and Michael McCann
SPSP’s first Media Prize recipients. This prize recognizes a person, normally outside the SPSP community, providing the best piece or collection of pieces in popular media that represents the contributions of personality or social psychology to the general public in a given calendar year.
Sponsored by SPSP
This award, which is presented at the APA Convention, is for “distinguished contributions to the study of lives … in the demanding kind of inquiry pioneered by Henry A. Murray.”
Sponsored by the Society of Personology and SPSP
The 2012 SAGE Young Scholars Awards
To be announced in January
These awards support the research of junior colleagues and recognize “outstanding young researchers” representing the broad spectrum of personality and social psychology research areas.
Sponsored by FPSP with the generous support of SAGE Publications
The 2011 Award for Distinguished Service to the Society
This award “recognizes distinguished efforts by individuals to benefit the field of social and personality psychology,” including noteworthy efforts to support educational and research activities in the field, professional leadership, and achievements that enhance the reputation of the field.
Sponsored by SPSP
“A metaphor-enriched social cognition.” Published in Psychological Bulletin in 2010.
This prize recognizes “the most theoretically innovative article, book chapter, or unpublished manuscript of the year.” It honors theoretical articles that are especially likely to generate the discovery of new hypotheses, new phenomena, or new ways of thinking about the discipline of social/personality psychology.
Sponsored by SPSP
From Duke Today (a press release about research co-authored by Situationist Contributor Aaron Kay):
The political unrest in the Middle East, which continues today in Syria, raises some intriguing questions: How can we explain the contagion effect of rebellion when revolution spreads from nation to nation? Is it possible to predict whether people will respond to limits on freedom with submission or rebellion?
New research from Duke University and the University of Waterloo to be published in the February edition of the journal Psychological Science finds the certainty of a restriction is significant in determining how people will respond to enforced limitations on freedom.
Across several studies, participants responded to restrictions that were certain to come into effect with “rationalization,” viewing the restrictions more favorably and valuing the restricted freedoms less. Participants responded to identical restrictions that were described as having a small chance of not coming into effect with “reactance,” viewing the restrictions less favorably and valuing the restricted freedoms more.
“There have traditionally been two schools of thought on how people react to restrictions on freedoms,” said Gavan Fitzsimons, professor of marketing at Duke’s Fuqua School of Business and one of the study’s authors. “One school of thought says people are likely to react to restrictions with rationalization and a level of acceptance, while a second suggests people are motivated to restore restricted freedoms and will respond negatively to attempts to constrain them. Our research reconciles these two opposing views by considering the restrictions’ degree of absoluteness.”
The study cites several hypothetical situations to explain the varied responses to restrictions on freedom. In one survey, participants read that the government had decided to reduce speed limits after experts concluded lower speed limits in cities increase safety. Some participants were told the new limits would definitely come into effect (an absolute condition), while others were told the limits would come into effect only if a majority of government officials voted to enact them (a non-absolute condition).
Participants in the absolute group tended to rationalize the new restrictions; they reacted with more positive attitudes and lower levels of annoyance toward reduced speed limits. In contrast, participants in the non-absolute group reacted strongly against the limits.
“Our findings have a number of practical applications, potentially shedding some light on the recent string of uprisings in the Middle East,” said co-author Aaron Kay, an associate professor of management and of psychology and neuroscience at Duke.
“To the extent a political regime feels absolute and permanent to its citizens, people will rationalize its actions and decisions, even minimizing the importance of freedoms. But once they learn similar regimes have been toppled and are therefore not as permanent as people once thought, they may become reactant and perhaps motivated to revolt,” he said.
From the APS Observer (excerpts from a terrific primer on “The Mechanics of Choice”):
* * *
The prediction of social behavior significantly involves the way people make decisions about resources and wealth, so the science of decision making historically was the province of economists. And the basic assumption of economists was always that, when it comes to money, people are essentially rational. It was largely inconceivable that people would make decisions that go against their own interests. Although successive refinements of expected-utility theory made room for individual differences in how probabilities were estimated, the on-the-surface irrational economic behavior of groups and individuals could always be forced to fit some rigid, rational calculation.

The problem is — and everything from fluctuations in the stock market to decisions between saving for retirement or purchasing a lottery ticket or a shirt on the sale rack shows it — people just aren’t rational. They systematically make choices that go against what an economist would predict or advocate.

Enter a pair of psychological scientists — Daniel Kahneman (currently a professor emeritus at Princeton) and Amos Tversky — who in the 1970s turned the economists’ rational theories on their heads. Kahneman and Tversky’s research on heuristics and biases and their Nobel Prize-winning contribution, prospect theory, poured real, irrational, only-human behavior into the calculations, enabling much more powerful prediction of how individuals really choose between risky options.
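Prospect theory’s departure from expected utility can be sketched numerically. The toy example below is not the authors’ own code; it uses the median parameter estimates from Tversky and Kahneman’s 1992 follow-up paper (α = 0.88, λ = 2.25, γ = 0.61) to show how a concave value function for gains, a steeper one for losses, and a nonlinear probability weight capture loss aversion and the overweighting of small probabilities:

```python
# Prospect theory value and probability-weighting functions, using the
# median parameter estimates reported by Tversky & Kahneman (1992):
# ALPHA (diminishing sensitivity), LAM (loss aversion), GAMMA (weighting).
ALPHA, LAM, GAMMA = 0.88, 2.25, 0.61

def value(x):
    """Subjective value of a gain or loss relative to a reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAM * (-x) ** ALPHA

def weight(p):
    """Decision weight: overweights small probabilities, underweights large ones."""
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

# Losses loom larger than equivalent gains:
print(value(100))    # ≈ 57.5
print(value(-100))   # ≈ -129.5

# A 1% chance gets a decision weight of about 0.055 — far more than 0.01,
# which helps explain why lottery tickets are bought at all:
print(weight(0.01))
```

In expected-utility terms, a rational agent would treat a 1% chance as exactly 0.01; the weighting function is where the “only-human” behavior enters the calculation.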
* * *
Univ. of Toronto psychologist Keith E. Stanovich and James Madison Univ. psychologist Richard F. West refer to these experiential and analytical modes as “System 1” and “System 2,” respectively. Both systems may be involved in making any particular choice — the second system may monitor the quality of the snap System-1 judgment and adjust a decision accordingly.7 But System 1 will win out when the decider is under time pressure or when his or her System-2 processes are already taxed.
This is not to entirely disparage System-1 thinking, however. Rules of thumb are handy, after all, and for experts in high-stakes domains, it may be the quicker form of risk processing that leads to better real-world choices. In a study by Cornell University psychologist Valerie Reyna and Mayo Clinic physician Farrell J. Lloyd, expert cardiologists took less relevant information into account than younger doctors and medical students did when making decisions to admit or not admit patients with chest pain to the hospital. Experts also tended to process that information in an all-or-none fashion (a patient was either at risk of a heart attack or not) rather than expending time and effort dealing with shades of gray. In other words, the more expertise a doctor has, the more his or her intuitive sense of the gist of a situation serves as a guide.8
In Reyna’s variant of the dual-system account, fuzzy-trace theory, the quick-decision system focuses on the gist or overall meaning of a problem instead of rationally deliberating on facts and odds of alternative outcomes.9 Because it relies on the late-developing ventromedial and dorsolateral parts of the frontal lobe, this intuitive (but informed) system is the more mature of the two systems used to make decisions involving risks.
A 2004 study by Vassar biopsychologist Abigail A. Baird and Univ. of Waterloo cognitive psychologist Jonathan A. Fugelsang showed that this gist-based system matures later than do other systems. People of different ages were asked to respond quickly to easy, risk-related questions such as “Is it a good idea to set your hair on fire?”, “Is it a good idea to drink Drano?”, and “Is it a good idea to swim with sharks?” They found that young people took about a sixth of a second longer than adults to arrive at the obvious answers (it’s “no” in all three cases, in case you were having trouble deciding).10 The fact that our gist-processing centers don’t fully mature until the 20s in most people may help explain the poor, risky choices younger, less experienced decision makers commonly make.
Adolescents decide to drive fast, have unprotected sex, use drugs, drink, or smoke not simply on impulse but also because their young brains get bogged down in calculating odds. Youth are bombarded by warning statistics intended to set them straight, yet risks of undesirable outcomes from risky activities remain objectively small — smaller than teens may have initially estimated, even — and this may actually encourage young people to take those risks rather than avoid them. Adults, in contrast, make their choices more like expert doctors: going with their guts and making an immediate black/white judgment. They just say no to risky activities because, however objectively unlikely the risks are, there’s too much at stake to warrant even considering them.11
Making Better Choices
The gist of the matter is, though, that none of us, no matter how grown up our frontal lobes, make optimal decisions; if we did, the world would be a better place. So the future of decision science is to take what we’ve learned about heuristics, biases, and System-1 versus System-2 thinking and apply it to the problem of actually improving people’s real-world choices.
One obvious approach is to get people to increase their use of System 2 to temper their emotional, snap judgments. Giving people more time to make decisions and reducing taxing demands on deliberative processing are obvious ways of bringing System 2 more into the act. Katherine L. Milkman (U. Penn.), Dolly Chugh (NYU), and Max H. Bazerman (Harvard) identify several other ways of facilitating System-2 thinking.12 One example is encouraging decision makers to replace their intuitions with formal analysis — taking into account data on all known variables, providing weights to variables, and quantifying the different choices. This method has been shown to significantly improve decisions in contexts like school admissions and hiring.
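The formal analysis Milkman, Chugh, and Bazerman describe — listing the known variables, assigning them weights, and quantifying each choice — amounts to a simple linear scoring model. A minimal sketch, with entirely invented candidates, attributes, and weights:

```python
# Toy linear scoring model of the kind used in hiring and admissions:
# each option is scored as a weighted sum of its attribute ratings.
# All names, attributes, and numbers here are hypothetical.
weights = {"experience": 0.5, "test_score": 0.3, "interview": 0.2}

candidates = {
    "Candidate A": {"experience": 6, "test_score": 9, "interview": 8},
    "Candidate B": {"experience": 9, "test_score": 6, "interview": 7},
}

def score(attrs):
    """Weighted sum of attribute ratings (each rated 0-10)."""
    return sum(weights[k] * attrs[k] for k in weights)

# Rank options by score, highest first:
ranked = sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)
for name in ranked:
    print(name, round(score(candidates[name]), 2))
```

The point of the method is that the weights are fixed in advance, so a strong interview impression cannot quietly inflate a candidate’s other ratings — the kind of halo effect that distorts intuitive, System-1 judgments.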
Having decision makers take an outsider’s perspective on a decision can reduce overconfidence in their knowledge, in their odds of success, and in their time to complete tasks. Encouraging decision makers to consider the opposite of their preferred choice can reduce judgment errors and biases, as can training them in statistical reasoning. Considering multiple options simultaneously rather than separately can optimize outcomes and increase an individual’s willpower in carrying out a choice. Analogical reasoning can reduce System-1 errors by highlighting how a particular task shares underlying principles with another unrelated one, thereby helping people to see past distracting surface details to more fully understand a problem. And decision making by committee rather than individually can improve decisions in group contexts, as can making individuals more accountable for their decisions.13
In some domains, however, a better approach may be to work with, rather than against, our tendency to make decisions based on visceral reactions. In the health arena, this may involve appealing to people’s gist-based thinking. Doctors and the media bombard health consumers with numerical facts and data, yet according to Reyna, patients — like teenagers — tend initially to overestimate their risks; when they learn their risk for a particular disease is actually objectively lower than they thought, they become more complacent — for instance by forgoing screening. Instead, communicating the gist, “You’re at (some) risk, you should get screened because it detects disease early” may be a more powerful motivator to make the right decision than the raw numbers. And when statistics are presented, doing so in easy-to-grasp graphic formats rather than numerically can help patients (as well as physicians, who can be as statistically challenged as most laypeople) extract their own gists from the facts.14
Complacency is a problem when decisions involve issues that feel more remote from our daily lives — problems like global warming. The biggest obstacle to changing people’s individual behavior and collectively changing environmental policy, according to Columbia University decision scientist Elke Weber, is that people just aren’t scared of climate change. Being bombarded by facts and data about perils to come is not the same as having it affect us directly and immediately; in the absence of direct personal experience, our visceral decision system does not kick in to spur us to make better environmental choices such as buying more fuel-efficient vehicles.15
How should scientists and policymakers make climate change more immediate to people? Partly, it involves shifting from facts and data to experiential button-pressing. Powerful images of global warming and its effects can help. Unfortunately, according to research conducted by Yale environmental scientist Anthony A. Leiserowitz, the dominant images of global warming in Americans’ current consciousness are of melting ice and effects on nonhuman nature, not consequences that hit closer to home; as a result, people still think of global warming as only a moderate concern.16
Reframing options in terms that connect tangibly with people’s more immediate priorities, such as the social rules and norms they want to follow, is a way to encourage environmentally sound choices even in the absence of fear.17 For example, a study by Noah J. Goldstein (Univ. of Chicago), Robert B. Cialdini (Arizona State), and Vladas Griskevicius (Univ. of Minnesota) compared the effectiveness of different types of messages in getting hotel guests to reuse their towels rather than send them to the laundry. Messages framed in terms of social norms — “the majority of guests in this room reuse their towels” — were more effective than messages simply emphasizing the environmental benefits of reuse.18
Yet another approach to getting us to make the most beneficial decisions is to appeal to our natural laziness. If there is a default option, most people will accept it because it is easiest to do so — and because they may assume that the default is the best. University of Chicago economist Richard H. Thaler suggests using policy changes to shift default choices in areas like retirement planning. Because claiming at first eligibility is framed as the norm, most people begin claiming their Social Security benefits as soon as they are eligible, in their early to mid-60s — a symbolic retirement age but not the age at which most people these days are actually retiring. Moving the “normal” retirement age up to 70 — a higher anchor — would encourage people to let their money grow longer untouched.19
* * *
Making Decisions About the Environment
APS Fellow Elke Weber recently had the opportunity to discuss her research with others who share her concern about climate change, including scientists, activists, and the Dalai Lama. Weber . . . shared her research on why people fail to act on environmental problems. According to her, both cognitive and emotional barriers prevent us from acting on environmental problems. Cognitively, for example, a person’s attention is naturally focused on the present to allow for their immediate survival in dangerous surroundings. This present-focused attitude can discourage someone from taking action on long-term challenges such as climate change. Similarly, emotions such as fear can motivate people to act, but fear is more effective for responding to immediate threats. In spite of these challenges, Weber said that there are ways to encourage people to change their behavior. Because people often fail to act when they feel powerless, it’s important to share good as well as bad environmental news and to set measurable goals for the public to pursue. Also, said Weber, simply portraying reduced consumption as a gain rather than a loss in pleasure could inspire people to act.
References and Further Reading:
7. Stanovich, K.E., & West, R.F. (2000). Individual differences in reasoning: Implications for the rationality debate. Behavioral & Brain Sciences, 23, 645–665.
8. Reyna, V.F., & Lloyd, F. (2006). Physician decision making and cardiac risk: Effects of knowledge, risk perception, risk tolerance, and fuzzy processing. Journal of Experimental Psychology: Applied, 12, 179–195.
9. Reyna, V.F. (2004). How people make decisions that involve risk: A dual-processes approach. Current Directions in Psychological Science, 13, 60–66.
10. Baird, A.A., & Fugelsang, J.A. (2004). The emergence of consequential thought: Evidence from neuroscience. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 359, 1797–1804.
11. Reyna, V.F., & Farley, F. (2006). Risk and rationality in adolescent decision making. Psychological Science in the Public Interest, 7, 1–44.
12. Milkman, K.L., Chugh, D., & Bazerman, M.H. (2009). How can decision making be improved? Perspectives on Psychological Science, 4, 379–383.
14. See Wargo, E. (2007). More than just the facts: Helping patients make informed choices. Cornell University Department
SALMS is excited to announce the opening of the 2012 Officer Selection process and to prepare for the new year with a Board meeting on Friday, 1/27 at noon in Hauser 101:
1. NEW OFFICER SELECTION: In the next few weeks, SALMS will begin a transition from its current officer class to the leadership that will direct SALMS into the New Year. Tentative Officer titles and descriptions for the 2012 year include:
i. President
– responsible for setting the vision and agenda of the organization, for delegating responsibilities to the SALMS officers and Board, and for collaborating with the Vice President to manage the daily operations of the organization (including managing logistics of Speakers Series events).
ii. Vice President and Treasurer
– responsible for managing the SALMS budget and collaborating with the president to manage the daily operations of the organization (including managing logistics of Speakers Series events).
iii. Speakers Chair
– responsible for organizing and overseeing the selection process for the SALMS Speakers Series, as well as managing invitations and coordinating with speakers.
1Ls interested in serving in these positions should email dkorn[at]jd13.law.harvard.edu to schedule a meeting (please include a copy of your resume, though no prior mind science background is required).
2. SPRING ORGANIZATIONAL MEETING: At noon on Friday, January 27, 2012, in Hauser 101, the SALMS Board will meet to discuss the upcoming semester. In addition to dividing up responsibilities for the spring, we will look ahead to our scheduled Speakers Series events.
Matty McFeely, former President of SALMS and current 3L, just had a situationist-inspired letter to the editor published in The New Yorker. The article to which he was responding (Rachel Aviv’s “No Remorse,” January 2, 2012) was about a 15-year-old sentenced to life without parole for shooting his grandfather. Before the murder, the boy’s girlfriend had just dumped him and a number of other things weren’t going his way, and the article asked whether putting a minor away for life was appropriate. Matty’s letter read as follows:
Aviv’s article forces us to rethink the justice system’s treatment of young adults, but it should also be a call for stricter gun control. It was too simple for Eliason to take “his grandfather’s loaded gun off the coatrack” and then shoot his grandfather. Eliason’s grim tale shows what surveys have already told us: the availability of guns is linked to higher rates of both suicide and homicide. A teen-ager’s rather routine funk became a senseless tragedy because a lethal device was at hand. A person’s situation has a lot of power over his or her behavior; we would be wise to recognize that fact and shape our situations accordingly.
Cruelty, violence, badness… This episode of Radiolab, we wrestle with the dark side of human nature, and ask whether it’s something we can ever really understand, or fully escape.
We begin with a chilling statistic: 91% of men, and 84% of women, have fantasized about killing someone. We take a look at one particular fantasy lurking behind these numbers, and wonder what this shadow world might tell us about ourselves and our neighbors. Then, we reconsider what Stanley Milgram’s famous experiment really revealed about human nature (it’s both better and worse than we thought). Next, we meet a man who scrambles our notions of good and evil: chemist Fritz Haber, who won a Nobel Prize in 1918…around the same time officials in the US were calling him a war criminal. And we end with the story of a man who chased one of the most prolific serial killers in US history, then got a chance to ask him the question that had haunted him for years: why?
For generations, social psychology students have read that Norman Triplett did the first social psychology experiment in 1898, when he found that children reeled in a fishing line faster when they were in the presence of another child than when they were alone.
But almost everything about that sentence is wrong. The new paper’s author, Wolfgang Stroebe of Utrecht University in the Netherlands, had recently published a handbook on the history of social psychology (with Arie W. Kruglanski) when he came across a 2005 reanalysis of Triplett’s data and dug further.
It turned out that the children in the study were turning a reel, but not reeling in a fishing line, and that Triplett was studying whether children performed better with competition. For his study, he eyeballed the data—an acceptable scientific practice in the 19th century—and decided that some children performed better when competing, some performed worse, and others were not affected. The 2005 analysis found that these results were not statistically significant by modern standards.
So the modern textbooks have the details of the study wrong. But they’re also wrong that Triplett was the first psychologist to look at how people are affected by each other.
In the 1880s, Max Ringelmann studied whether workers pulled harder when they were together than when they worked alone. In 1894, Binet and Henri published a study of social influence among children and in 1887, Charles Féré authored a book that described experiments on how the presence of others could increase individual performance. But the field didn’t find its modern identity until 1924, says Stroebe, when Floyd Allport published a textbook defining social psychology as the experimental study of social behavior.
“I think the more interesting fact is that in the 1890s so many authors tried to answer questions relevant to social psychology with experimental methods,” Stroebe says. “This is much more important than to figure out who was really the first author.”
It’s time to fix the textbooks, Stroebe says. “I especially tried to get the article into a major journal in the hope that authors will take more notice of it than of articles published in historical journals.” He thinks his paper is important even though it isn’t at the cutting edge of research. “I was trained many decades ago in a period where one would have considered correcting the history of the origin of an important subfield of psychology to be important,” Stroebe writes in the conclusion of his article. “We even had a word for it. We called it scholarship.”
Peruse dozens of Situationist posts about classic social psychological experiments here.
Edited by Situationist Editor Jon Hanson, Ideology, Psychology, and Law examines the sometimes unsettling interactions between psychology, ideology, and law and elucidates the forces, beyond and beneath the logic, that animate the law.
Here is some of the glowing praise for the volume from, among others, several Situationist Contributors:
“Ideology, Psychology, and Law is a revolution in the making. Encyclopedic in its breadth, this volume captures a moment – like the early heady days of the law and economics movement – when bold, new inquiries are suddenly possible. For those who still cling to the centrality of preferences and incentives, this book will be usefully threatening.”
~ Ian Ayres, William K. Townsend Professor, Yale Law School, and author of Carrots and Sticks: Unlock the Power of Incentives to Get Things Done
“This volume is the first of its kind, employing the latest mind science research to illuminate the motivated and unconscious inspirations for ideology, law, and policy. The superbly edited and timely volume is a highly accessible, interdisciplinary collection, bringing together the perspectives and insights of many of the world’s most thoughtful and influential social psychologists, political scientists, and legal scholars. It is essential reading for anyone who wants to better understand the psychological winds buffeting our institutions of collective governance.”
~Philip G. Zimbardo, Professor Emeritus of Psychology, Stanford University
“With this collection, Jon Hanson and the contributors to this volume have gone a long way towards breaking the iron grip that Law and Economics has held on serious legal policy analysis. By incorporating insights from psychology and other behavioral and mind sciences, this volume maps an important and inspiring interdisciplinarity that will guide path-breaking work in the future.”
~ Gerald Torres and Lani Guinier, co-authors of The Miner’s Canary: Enlisting Race, Resisting Power, Transforming Democracy
“This volume shows what ideology is and does. The chapters written by psychologists demonstrate that there is little about the mind’s work that can be called ‘neutral.’ The legal scholars who contribute to this volume push forward to ask how the law must itself bend toward justice, if such is the case. This compendium contains facts and ideas that, if heeded, may bring the law closer to the aspiration that everybody be equal before the law.”
~ Mahzarin R. Banaji, Cabot Professor of Social Ethics, Department of Psychology, Harvard University
“Insightful, comprehensive, boundary-spanning. Hanson pulls together research and ideas from multiple disciplines to create a new way of looking at the most important legal questions of our time.”
~ Sheena S. Iyengar, S.T. Lee Professor of Business, Columbia Business School and author of The Art of Choosing
From People’s World (an article summarizing recent research by Situationist Contributor Susan Fiske):
Why do people commit atrocities? What is responsible for brutality and the cold-blooded murder of innocents carried out by Nazis, the Hutu in Rwanda, or the United States against the Vietnamese people and, more recently, much of the civilian population of Iraq? Some scientists believe they have found the answer.
ScienceDaily reports (“Brain’s Failure to Appreciate Others May Permit Human Atrocities,” 12-14-2011) that the part of the brain responsible for social interaction with others may malfunction, producing a callousness that leads to inhumane actions toward others. Scientists at Duke and Princeton have hypothesized, in a recent study, that this brain area can “disengage” when people encounter others they think are “disgusting,” and that the resulting violence perpetrated against them stems from thinking these objectified others have no “thoughts and feelings.”
The study, according to ScienceDaily, considers this a “shortcoming” which could account for the genocide and torture of other peoples. Examples of this kind of objectification can be seen in the calling of Jews “vermin” by the Nazis, the Tutsi “cockroaches” by the Hutu, and the American habit of calling others “gooks” (as well as other unflattering terms).
Lasana Harris (Duke) says, “When we encounter a person, we usually infer something about their minds [do they have more than one?]. Sometimes, we fail to do this, opening up the possibility that we do not perceive the person as fully human.” I wonder about this. What is meant by “fully human”? Surely the Hutu, for example, who had lived with the Tutsi for centuries, did not really fail to infer that they had “minds.”
Practicing something called “social neuroscience,” which seems to consist of showing people pictures while they undergo an MRI and then drawing conclusions from which areas of the brain do or do not “light up” when they are asked questions about those pictures, the scientists conducting this study discovered that an area of the brain dealing with “social cognition” (i.e., feelings, thoughts, empathy, etc.) “failed to engage” when pictures of homeless people, drug addicts, and others “low on the social ladder” were shown.
Susan Fiske (Princeton) remarked, “We need to think about other people’s experience. It’s what makes them fully human to us.” ScienceDaily adds that the researchers were struck by the fact that “people will easily ascribe social cognition – a belief in an internal life such as emotions – to animals and cars, but will avoid making eye contact with the homeless panhandler in the subway.”
This is the fourth in our series of posts intended to help our readers with their New Year’s resolutions. From USA Today, here is a brief description of research recently co-authored by Kristin Laurin and Situationist Contributors Aaron Kay and Gráinne Fitzsimons.
God references slipped into tests decreased students’ belief that they controlled their own destiny, researchers report, but made them more resistant to junk-food temptation.
In the study, published in the current Journal of Personality and Social Psychology, researchers led by Kristin Laurin of Canada’s University of Waterloo reported on six experiments with engineering students, finding that just mentioning the Supreme Being in tests affected student self-perceptions and self-control, regardless of their fundamental religious views.
In the first set of tests, the research team gave the students word-game-type tasks, telling them the tests were indicators of future achievement. Half the tests included references to religion in the sentences read by the students, while the rest contained references to merely pleasant things, such as the sun, instead.
The result? Religion references dropped student views significantly on how much they felt in control of their careers.
However, in the last three experiments the team slipped religious references into similar tasks, but then checked students’ ability to resist junk food and sweets.
The result? Religion references increased the students’ ability to resist temptation. Most remarkably, the effect seemed independent of the depth of the engineers’ piety.
Given how often religious references crop up in daily life, the study authors suggest that they may play a role in even the most godless person’s psychology, and call for more research to confirm their finding.
(Citation: Laurin, K., Kay, A. C., & Fitzsimons, G. M. (in press). Divergent effects of activating thoughts of God on self-regulation. Journal of Personality and Social Psychology – pdf available here.)
For more on the situation of eating, see Situationist contributors Adam Benforado, Jon Hanson, and David Yosifon’s law review article Broken Scales: Obesity and Justice in America. For a listing of numerous Situationist posts on the situational sources of obesity, click here.
This is the third in our series of posts intended to help our readers with their New Year’s resolutions. From The Sun Herald, here is a brief description of recent research on the benefits of retraining your brain.
What does it really take to change a habit? It may have less to do with willpower and more to do with consistency and a person’s environment, researchers have found.
A 2009 study in the European Journal of Social Psychology had 96 people adopt a new healthful habit over 12 weeks – things like running for 15 minutes at the same time each day or eating a piece of fruit with lunch. The average number of days it took for participants to pick up the habit was 66, but the range was huge, from 18 to 254 days.
Those who chose simple habits, such as drinking a glass of water, did better overall than those who had more involved tasks, such as running.
Skipping a day here and there didn’t seem to derail things, but greater levels of inconsistency did. Erratic performers tended not to form habits.
The same study also found that having a cue for when or where you performed the habit acted as a reminder and helped to make the habit stick. By always exercising in the morning you’re reminded that when you get up, it’s time to head to the gym. Consistently eating meals at the dining table takes away the urge to eat while sitting on the sofa with the television on.
Contrary to popular belief, adopting more healthful routines may have little to do with how much resolve someone has, says Wendy Wood, Provost Professor of Psychology and Business at the University of Southern California.
“We tell ourselves that if only we had willpower we’d be able to exercise every day and avoid eating bags of chips,” she says. “But those behaviors are difficult to control because we have patterns that are cued by the environment” – patterns that we’ve learned from past bad habits.
We’ve learned to associate being in the car with eating from fast-food restaurant drive-throughs, so that when we’re out running errands we find ourselves wanting a burger and fries, perhaps when we’re not even hungry.
We’ve learned to associate arriving home with collapsing in front of the TV, and arriving at work with taking the elevator.
We go to the movies and automatically purchase a giant drum of buttery popcorn – and once the habit is formed, we’ll eat the popcorn even if it tastes bad, Wood has found.
In a study she coauthored that was published in 2011 in the journal Personality and Social Psychology Bulletin, moviegoers were given fresh or stale popcorn to snack on while watching trailers.
People who were avid popcorn-eaters ate the same amount of stale popcorn as fresh: They evidently were snacking mindlessly. In contrast, those who didn’t have a movie-popcorn habit ate less stale popcorn than fresh.
“Once these habits become cued by the environment,” Wood says, “they tend to continue whether people are enjoying them or not.”
Wood suggests devising new activities to link to our environmental cues.
At the movie theater, instead of getting a large popcorn, get a small one or drink water instead. Soon you’ll associate movies with those new choices. Take the stairs the minute you walk into the building where you work – soon you’ll associate arriving at work with stair-climbing.
Instead of succumbing to the habit of snacking while sitting on the sofa and watching TV, use the time instead to do some simple exercises. After a while … you get the idea.
It takes some thought in the beginning, Wood says, “but once you’ve figured it out, it runs on its own. You’ve outsourced your behavior to the environment.”
For more on the situation of eating, see Situationist contributors Adam Benforado, Jon Hanson, and David Yosifon’s law review article Broken Scales: Obesity and Justice in America. For a listing of numerous Situationist posts on the situational sources of obesity, click here.
For many people, ignorance is bliss when it comes to vexing issues like climate change, according to a new study.
Published last month in the Journal of Personality and Social Psychology, the report shows that people who know very little about an issue — say the economic downturn, changes in the climate or dwindling fossil fuel reserves — tend to avoid learning more about it. This insulates them in their ignorance — a pattern described by researchers as “motivated avoidance.”
Faced with complicated or troubling situations, these people often defer to authorities like the government or scientists, hoping they have the situation under control.
“Our research suggests that this kind of overwhelmed feeling, and feeling that an issue is ‘above one’s head’ leads people to feel dependent on the government, and this dependence is managed by trusting the government more to deal with an issue, and this is managed by avoiding the issue,” explained Steven Shepherd, a social psychology doctoral student at the University of Waterloo in Canada and an author of the report, in an email.
“This is psychologically easier than taking a significant amount of time to learn about an issue, all the while confronting unpleasant information about it,” he added.
The report used survey data from 511 participants between 2010 and 2011. “In four studies we manipulated how we framed a domain like the economy or energy (e.g., simple or complex), and in one study, we manipulated whether or not a future oil shortage was said to be an immediate problem, or a distant future problem,” Shepherd said.
The researchers found that people who received complex information on an issue felt more helpless and more trusting in government compared to those who received relatively simple explanations. In addition, people who felt ignorant on a certain topic — especially issues with dire consequences like fuel shortages or climate change — would reject negative information.
But researchers say there’s more to it than just plugging your ears and saying “la la la.”
The trust-and-avoid ploy
Motivated avoidance stems from a phenomenon known as system justification. “It refers to a motivation that most people hold to believe that the systems that they function with are legitimate,” explained [Situationist Contributor] Aaron Kay, another author. Kay, who is an associate professor of psychology and neuroscience at Duke University, explained that people working within a government agency or large institution can’t really influence the collective group on their own.
So they are inclined to conclude the group largely knows what it’s doing. “It doesn’t always imply that people think this is good, but they think it’s better than the government not being in control,” he said. To maintain this view, he noted, people will deliberately avoid information that contradicts it.
“Climate change is a global issue that, seemingly, is beyond the efforts of any one individual. … I think a lot of people feel unable to do anything about it,” said Shepherd. “The next best thing is to either deny it, or defer the issue to governments to deal with it. … In our research we find that one easy way to maintain that psychologically comforting trust that an issue is being dealt with is to simply avoid the issue.”
The authors also speculate that political leanings play into whether people want to trust politicians to handle climate change. “I think we see this in the recent ‘Occupy’ movements, and among those pushing for governments to do more about climate change,” Shepherd said.
“People who simply distrust the government to begin with, or libertarians who prefer to have as little government involvement in their lives as possible, are also unlikely to respond to feeling dependent on the government by trusting in them more.”
In-house lawyers are under considerable pressure to “get comfortable” with the legality and legitimacy of client goals. This paper explores the psychological forces at work when inside lawyers confront such pressure by reference to the recent financial crisis, looking at problems arising from informational ambiguity, imperceptible change, and motivated inference. It also considers the pathways to power in-house, i.e., what kinds of cognitive styles are best suited to rise in highly competitive organizations such as financial services firms. The paper concludes with a research agenda for better understanding in-house lawyers, including exploration of the extent to which the diffusion of language and norms has reversed direction in recent years, such that outside lawyers are now taking cognitive and behavioral cues from the insiders, rather than establishing the standards and vocabulary for in-house lawyers.
Since the early 2000s, much of Jon Hanson’s (and other Situationist Contributors’) research, writing, teaching, and speaking has focused on the role of “choice,” “the choice myth,” and “choicism” in rationalizing injustice and inequality, particularly in the U.S. (e.g., The Blame Frame: Justifying (Racial) Injustice in America). That work has helped to inspire a significant amount of fascinating experimental research (and, unfortunately, one derivative book) on the topic. Over the next couple of months, we will highlight some of that intriguing new research on The Situationist.
Here is an abstract and excerpts from a fascinating article (forthcoming, Psychological Science – pdf of draft here) co-authored by Situationist friend Krishna Savani (Columbia) and Aneeta Rattan (Stanford). Their article examines how “a choice mindset increases the acceptance and maintenance of wealth inequality.”
* * *
Abstract: Wealth inequality has significant psychological, physiological, societal, and economic costs. We investigate how seemingly innocuous, culturally pervasive ideas can help maintain and further wealth inequality. Specifically, we test whether the concept of choice, which is deeply valued in American society, leads people to act in ways that maintain and perpetuate wealth inequality. Choice, we argue, activates the belief that life outcomes stem from personal agency, not from societal factors, leading people to justify wealth inequality. Six experiments show that when choice is highlighted, people are less disturbed by facts about the existing wealth inequality in the U.S., more likely to underestimate the role of societal factors in individuals’ successes, less likely to support the redistribution of educational resources, and less likely to tax the rich even to resolve a government budget deficit crisis. The findings indicate that the culturally valued concept of choice contributes to the maintenance of wealth inequality.
* * *
Wealth inequality has substantial negative consequences for societies, including reduced well-being (Napier & Jost, 2008), fewer public goods (Frank, 2011; Kluegel & Smith, 1986), and even lower economic growth (Alesina & Rodrik, 1994). Despite these well-known negative consequences, high levels of wealth inequality persist in many nations. For example, the U.S. has the greatest degree of wealth inequality among all the industrialized countries in terms of the Gini Coefficient (93rd out of 134 countries; CIA Factbook, 2010). Moreover, wealth inequality in the U.S. substantially worsened in the first decade of the 21st century, with median household income in 2010 equal to that in 1997 (U.S. Census Bureau, 2011), although per-capita GDP increased by 33% over the same period (Bureau of Economic Analysis, 2011), indicating that all of the gain in wealth was concentrated at the top end of the wealth distribution.
A large majority of Americans disapprove of a high degree of wealth inequality (Norton & Ariely, 2011), for example, when the top 1% of people on the wealth distribution possess 35% of the nation’s wealth, as was the case in the U.S. in 2007 (Wolff, 2010). Instead, people prefer a more equal distribution of wealth that includes a strong middle class, such as when the middle 60% of people own approximately 60% of the nation’s wealth, rather than only the 15% that they owned in the U.S. in 2007. If people are unhappy with wealth inequality, then policies that reduce this inequality should be widely supported, particularly in times of increasing wealth inequality. However, Americans often oppose specific policies that would remedy wealth inequality (Bartels, 2005). For example, taxation and redistribution—taxing the rich and using the proceeds to provide public goods, public insurance, and a minimum standard of living for the poor—is probably the most effective means for reducing wealth inequality from an economic perspective (Frank, 2011; Korpi & Palme, 1998). However, most Americans, including working class and middle class citizens, have supported tax cuts even for the very rich and oppose government spending on social services that would mitigate inequality (Bartels, 2005; Fong, 2001). What factors explain this inconsistency between a general preference for greater wealth equality and opposition to specific policies that would produce it? We investigate whether people’s attitudes toward wealth inequality and support for policies that reduce wealth inequality are influenced by the concept of choice.
Choice is a core concept in U.S. American culture . . . .
Recent research suggests that the concept of choice decreases support for societally beneficial policies (e.g., a tax on highly polluting cars) but increases support for policies furthering individual rights (e.g., legalizing drugs; Savani, Stephens, & Markus, 2011). Historical analyses also suggest that Americans often use the concept of choice to justify inequality, arguing that the poor are poor because they made bad choices (Hanson & Hanson, 2006; see also Stephens & Levine, 2011). Building upon this work, we theorized that the assumption that people make free choices, when combined with the fact that some people turned out rich and others poor, leads people to believe that inequality in life outcomes is justified and reasonable. Therefore, when people think in terms of choice, we hypothesized that they would be less disturbed by wealth inequality and less supportive of policies aimed at reducing this inequality. . . .
From Duke Today, a story about recent research by Situationist Contributor Susan Fiske:
A father in Louisiana bludgeoned and beheaded his disabled 7-year-old son last August because he no longer wanted to care for the boy.
For most people, such a heinous act is unconscionable.
But it may be that a person can become callous enough to commit human atrocities because of a failure in the part of the brain that’s critical for social interaction. A new study by researchers at Duke University and Princeton University suggests this function may disengage when people encounter others they consider disgusting, thus “dehumanizing” their victims by failing to acknowledge they have thoughts and feelings.
This shortcoming also may help explain how propaganda depicting Tutsi in Rwanda as cockroaches and Hitler’s classification of Jews in Nazi Germany as vermin contributed to torture and genocide, the study said.
“When we encounter a person, we usually infer something about their minds. Sometimes, we fail to do this, opening up the possibility that we do not perceive the person as fully human,” said lead author Lasana Harris, an assistant professor in Duke University’s Department of Psychology & Neuroscience and Center for Cognitive Neuroscience. Harris co-authored the study with Susan Fiske, a professor of psychology at Princeton University.
Social neuroscience has shown through MRI studies that people normally activate a network in the brain related to social cognition — thoughts, feelings, empathy, for example — when viewing pictures of others or thinking about their thoughts. But when participants in this study were asked to consider images of people they considered drug addicts, homeless people, and others they deemed low on the social ladder, parts of this network failed to engage.
What’s especially striking, the researchers said, is that people will easily ascribe social cognition — a belief in an internal life such as emotions — to animals and cars, but will avoid making eye contact with the homeless panhandler in the subway.
“We need to think about other people’s experience,” Fiske said. “It’s what makes them fully human to us.”
The duo’s previous research suggested that a lack of social cognition can be linked to not acknowledging the mind of other people when imagining a day in their life, and rating them differently on traits that we think differentiate humans from everything else.
This latest study expands on that earlier work to show that these traits correlate with activation in brain regions beyond the social cognition network. These areas include those brain areas involved in disgust, attention and cognitive control.
The result is what the researchers call “dehumanized perception,” or failing to consider someone else’s mind. Such a lack of empathy toward others can also help explain why some members of society are sometimes dehumanized, they said.
For this latest study, 119 undergraduates from Princeton completed judgment and decision-making surveys as they viewed images of people. The researchers sought to examine the students’ responses to common emotions triggered by images such as:
— a female college student and male American firefighter (pride);
— a business woman and rich man (envy);
— an elderly man and disabled woman (pity);
— a female homeless person and male drug addict (disgust).
After imagining a day in the life of the people in the images, participants next rated the same person on various dimensions. They rated characteristics including the warmth, competence, similarity, familiarity, responsibility of the person for his/her situation, control of the person over their situation, intelligence, complex emotionality, self-awareness, ups-and-downs in life, and typical humanity.
Participants then went into the MRI scanner and simply looked at pictures of people.
The study found that the neural network involved in social interaction failed to respond to images of drug addicts, the homeless, immigrants and poor people, replicating earlier results.
“These results suggest multiple roots to dehumanization,” Harris said. “This suggests that dehumanization is a complex phenomenon, and future research is necessary to more accurately specify this complexity.”
The sample’s mean age was 20, with 62 female participants. The ethnic composition of the Princeton students who participated in the study was 68 white, 19 Asian, 12 of mixed descent, and 6 black, with the remainder not reporting.
Professor Caroline Forell has written a wonderfully thoughtful, situationist article, titled “McTorts: The Social and Legal Impact of McDonald’s Role in Tort Suits” (forthcoming in Volume 24 of the Loyola Consumer Law Review), on SSRN. Here’s the abstract.
* * *
McDonald’s is everywhere. With more than 32,000 restaurants around the world, its Golden Arches and “Mc” conjure up both the good and the bad about American capitalism.
This article looks at McDonald’s impact on public policy and tort law from historical and social psychology perspectives, following McDonald’s from its beginnings in the mid-1950s through today. By examining McDonald’s Corp. v. Steel and Morris (McLibel), Liebeck v. McDonald’s Restaurants (Hot Coffee), and Pelman v. McDonald’s Corp. (Childhood Obesity), I demonstrate that certain tort cases involving McDonald’s have had particularly important social and legal consequences that I attribute to McDonald’s special influence over the human psyche, beginning in childhood. In explaining McDonald’s extraordinary power over the public imagination and how this affects lawsuits involving it, I rely on the social psychology approach called situationism, which recognizes the strong effect that environmental influences can have on individual decision-making. I conclude that lawsuits involving McDonald’s have had and will continue to have important social and legal consequences because of the unique role this corporation plays in our lives.
* * *
Download the paper for free here.
Related Situationist posts:
A new study finds that atheists are among society’s most distrusted groups, comparable even to rapists in certain circumstances.
Psychologists at the University of British Columbia and the University of Oregon say that their study demonstrates that anti-atheist prejudice stems from moral distrust, not dislike, of nonbelievers.
“It’s pretty remarkable,” said Azim Shariff, an assistant professor of psychology at the University of Oregon and a co-author of the study, which appears in the current issue of Journal of Personality and Social Psychology.
The study, conducted among 350 American adults and 420 Canadian college students, presented participants with a description of a fictional driver who damaged a parked car and left the scene, then found a wallet and took the money. Participants were asked whether the driver was more likely to be a teacher, an atheist teacher, or a rapist teacher.
The participants, who were from religious and nonreligious backgrounds, most often chose the atheist teacher.
The study is part of an attempt to understand what needs religion fulfills in people. Among those needs, the researchers conclude, is a sense of trust in others.
“People find atheists very suspect,” Shariff said. “They don’t fear God so we should distrust them; they do not have the same moral obligations of others. This is a common refrain against atheists. People fear them as a group.”
Shariff, who studies atheism and religion, said the findings provide a clue to combating anti-atheism prejudice.
“If you manage to offer credible counteroffers of these stereotypes, this can do a lot to undermine people’s existing prejudice,” he said. “If you realize there are all these atheists you’ve been interacting with all your life and they haven’t raped your children, that is going to do a lot to dispel these stereotypes.”
Last spring, I had the pleasure of participating in the 2011 PLMS Conference: The Psychology of Inequality. As chronicled on this blog (and elsewhere), it was a tremendous group of speakers and many of the talks have continued to resonate as issues of inequality have continued to boil up, particularly in the form of the Occupy movement.
One of the issues that is particularly interesting to me is how people react when confronted with evidence of inequality. The psychology is complicated because people’s reactions are contingent on numerous situational variables. For one thing, different people seem to have very different tolerances for inequality. For another, it seems to matter whether the context is one of system threat or general optimism.
A number of mind scientists are busy at work documenting and sorting out these details and in the process we are learning a tremendous amount about how inequality is perpetuated. One of the things that researchers have discovered is that those who are motivated to do so are incredibly adept at minimizing even the starkest data.
To me, that is absolutely staggering information. Read it again: six people on the Forbes list have equivalent wealth to roughly one-third of all American families taken together. To me, this is a clear sign that the need for reform is urgent. But for other people, this isn’t troubling at all. It’s not that they are being facetious; they genuinely view the data differently than I do.
Because of this divergence, I would argue that before we can make any of the changes that would address the growing inequality in the United States (which I believe we must do as a society, from both a practical and a moral perspective), we need to spend more time and energy understanding why some people don’t see a problem in the first place.