The Situationist

Posts Tagged ‘Morality’

Frans De Waal on Morality

Posted by The Situationist Staff on April 12, 2012

Empathy, cooperation, fairness and reciprocity — caring about the well-being of others seems like a very human trait. But Frans de Waal shares some surprising videos of behavioral tests, on primates and other mammals, that show how many of these moral traits all of us share.

Related Situationist posts:

Posted in Altruism, Distribution, Emotions, Morality, Video | Comments Off on Frans De Waal on Morality

Babies + Fairness = ?

Posted by Adam Benforado on February 21, 2012

Here at The Situationist, we love babies (see here) and we love fairness (see here), and when you put the two together it’s like an apple pie baked inside a cake (see here) . . . or, well, this new article by Stephanie Sloane, Renée Baillargeon, and David Premack:

Two experiments examined infants’ expectations about how an experimenter should distribute resources and rewards to other individuals. In Experiment 1, 19-month-olds expected an experimenter to divide two items equally, as opposed to unequally, between two individuals. The infants held no particular expectation when the individuals were replaced with inanimate objects, or when the experimenter simply removed covers in front of the individuals to reveal the items (instead of distributing them). In Experiment 2, 21-month-olds expected an experimenter to give a reward to each of two individuals when both had worked to complete an assigned chore, but not when one of the individuals had done all the work while the other played. The infants held this expectation only when the experimenter could determine through visual inspection who had worked and who had not. Together, these results provide converging evidence that infants in the 2nd year of life already possess context-sensitive expectations relevant to fairness.

As Sloane explained to ScienceDaily, “We think children are born with a skeleton of general expectations about fairness, and these principles and concepts get shaped in different ways depending on the culture and the environment they’re brought up in. . . . [H]elping children behave more morally may not be as hard as it would be if they didn’t have that skeleton of expectations.”

Related Situationist posts:

Posted in Abstracts, Altruism, Distribution, Morality | Leave a Comment »

Paul Bloom at Harvard Law School – Do Babies Crave Justice?

Posted by The Situationist Staff on February 19, 2012

Paul Bloom, Yale psychology professor, will speak at Harvard Law School tomorrow (Monday) in a talk titled “Do Babies Have a Sense of Morality and Justice? Is Kindness Genetic or Learned?”

Professor Bloom will argue that even babies possess a rich moral sense. They distinguish between good and bad acts and prefer good characters over bad ones. They feel pain at the pain of others, and might even possess a primitive sense of justice. But this moral sense is narrow, and many principles that are central to adult morality, such as kindness to strangers, are the product of our intelligence and our imagination; they are not in our genes. He will end with a discussion of the evolution and psychology of purity and disgust.

Paul Bloom is a professor of psychology at Yale University. His research explores how children and adults understand the physical and social world, with special focus on morality, religion, fiction, and art. He has won numerous awards for his research and teaching. He is past-president of the Society for Philosophy and Psychology, and co-editor of Behavioral and Brain Sciences, one of the major journals in the field. Dr. Bloom has written for scientific journals such as Nature and Science, and for popular outlets such as The New York Times, The Guardian, and The Atlantic. He is the author or editor of four books, including How Children Learn the Meanings of Words, and Descartes’ Baby: How the Science of Child Development Explains What Makes Us Human. His newest book, How Pleasure Works, was published in June 2010.

Tomorrow’s talk will take place from 12 – 1 pm in Wasserstein Hall, Room 1023. Free Chinese food lunch!

Image from Flickr.

Related Situationist posts:

Posted in Altruism, Events, Evolutionary Psychology, Morality | 1 Comment »

RADIOLAB on the Situation of Badness

Posted by The Situationist Staff on January 19, 2012


Cruelty, violence, badness… In this episode of Radiolab, we wrestle with the dark side of human nature and ask whether it’s something we can ever really understand, or fully escape.

We begin with a chilling statistic: 91% of men, and 84% of women, have fantasized about killing someone. We take a look at one particular fantasy lurking behind these numbers, and wonder what this shadow world might tell us about ourselves and our neighbors. Then, we reconsider what Stanley Milgram’s famous experiment really revealed about human nature (it’s both better and worse than we thought). Next, we meet a man who scrambles our notions of good and evil: chemist Fritz Haber, who won a Nobel Prize in 1918…around the same time officials in the US were calling him a war criminal. And we end with the story of a man who chased one of the most prolific serial killers in US history, then got a chance to ask him the question that had haunted him for years: why?

Go to the RADIOLAB website to listen to the podcast.

Related Situationist posts:

Posted in Classic Experiments, Conflict, History, Morality, Podcasts, Social Psychology | Leave a Comment »

The Situation of Skin

Posted by The Situationist Staff on November 14, 2011

From Psych Central (an article about the recent work of Situationist friend, Kurt Gray):

A new study finds that when men or women look at someone wearing revealing attire they perceive the individual as being more sensitive, yet not as smart.

University of Maryland psychologist Kurt Gray and colleagues from Yale and Northeastern University have published their study in the Journal of Personality and Social Psychology.

In the article the researchers acknowledge the obvious — that it would be absurd to think people’s mental capacities fundamentally change when they remove clothing.

“In six studies, however, we show that taking off a sweater, or otherwise revealing flesh, can significantly change the way a mind is perceived.”

The study breaks new ground: past research, feminist theory, and parental admonishments have all long suggested that when men see a woman wearing little or nothing, they focus on her body and think less of her mind.

In the new study, the researchers show that paying attention to someone’s body can alter how both men and women view both women and men.

“An important thing about our study is that, unlike much previous research, ours applies to both sexes. It also calls into question the nature of objectification because people without clothes are not seen as mindless objects, but they are instead attributed a different kind of mind,” says UMD’s Gray.

“We also show that this effect can happen even without the removal of clothes. Simply focusing on someone’s attractiveness, in essence concentrating on their body rather than their mind, makes you see her or him as less of an agent [someone who acts and plans], more of an experiencer.”

Traditional psychological theory suggests that we see the mind of others on a continuum between the full mind of a normal human and the mindlessness of an inanimate object.

This paradigm, termed objectification, suggests that looking at someone in a sexual context — such as in pornography — leads people to focus on physical characteristics, turning them into an object without a mind or moral status.

However, recent findings indicate that rather than looking at others on a continuum from object to human, we see others as having two aspects of mind: agency and experience.

Agency is the capacity to act, plan and exert self-control, whereas experience is the capacity to feel pain, pleasure and emotions. Various factors – including the amount of skin shown – can shift which type of mind we see in another person.

During the study multiple experiments provided support for the two kinds of mind view. When men and women in the study focused on someone’s body, perceptions of agency (self-control and action) were reduced, and perceptions of experience (emotion and sensation) were increased.

Gray and colleagues suggest that this effect occurs because people unconsciously think of minds and bodies as distinct, or even opposite, with the capacity to act and plan tied to the “mind” and the ability to experience or feel tied to the body.

According to Gray, their findings indicate that the change in perception that results from showing skin is not all bad.

“A focus on the body, and the increased perception of sensitivity and emotion it elicits might be good for lovers in the bedroom,” he says.

Researchers also found that a body focus can actually increase moral standing. Although those wearing little or no clothing — or otherwise represented as a body — were seen to be less morally responsible, they also were seen to be more sensitive to harm and hence deserving of more protection.

“Others appear to be less inclined to harm people with bare skin and more inclined to protect them. In one experiment, for example, people viewing male subjects with their shirts off were less inclined to give those subjects uncomfortable electric shocks than when the men had their shirts on,” Gray says.

Practically, the researchers note that in settings where people are primarily evaluated on their capacity to plan and act, a body focus clearly has negative effects.

Seeing someone as a body strips him or her of competence and leadership, potentially impacting job evaluations.


Image from Flickr.

Related Situationist posts:

Posted in Embodied Cognition, Evolutionary Psychology, Morality, Social Psychology | Leave a Comment »

Whitey Bulger’s Situation

Posted by The Situationist Staff on July 5, 2011

From Northeastern News:

Notorious Boston gangster James “Whitey” Bulger — who eluded authorities for more than 16 years — is accused of murdering 19 people. Here, David DeSteno, associate professor of psychology at Northeastern University, who studies the role of emotion in social cognition and social behavior, assesses the mind of crime figures like Bulger and those who exalt them as heroes.

What drives immoral behavior?

We cannot assume that Whitey Bulger, Anthony Weiner, or other “fallen” individuals were flawed from the start. After all, Whitey’s brother, William Bulger, was raised in the same environment but followed a different trajectory; he ended up becoming the president of the University of Massachusetts.

The answer, then, to what makes someone “bad” is found in understanding how character really works. Character, as it turns out, isn’t established early in life and fixed thereafter. It’s always in flux. Our moral behaviors are determined moment to moment by situational influences on the competing mechanisms in our mind. One class of mechanisms focuses on what’s good in the short term. The other class is focused on the long term — what actions, even if they sacrifice short-term benefits, will lead to long-term gain. Cheating or lying, for example, may offer a short-term gain. Cheating or lying too much, however, could lead to getting caught and ostracized, which carries long-term losses.

The more power that an individual possesses, the greater the disconnect between short-term and long-term impulses. With increased power, politicians, corporate CEOs, or mob bosses, for example, tend to view themselves as invulnerable and begin to favor short-term, expedient actions like cheating or aggression. Such power, then, allows the scale of character to tip toward self-serving, and possibly criminal, actions. The potential for vice and virtue resides in each of us. If we forget that, we’re much more likely to act immorally as well.

Some South Boston residents appear to be rooting for Bulger. Why do so many still look at him as a local hero and turn a blind eye to his criminal record?

How we judge a person’s character often has to do with how he “related” to us. Work in my lab shows that whether we’re willing to condemn someone for committing a transgression doesn’t depend solely on the objective facts. For one study, we asked participants to put on one of two different colored wristbands and then watch a staged interaction between two actors, which participants thought was real. In the scenario, one actor cheated on a task that left the other with more work to complete. We then asked our research participants to judge how fairly the cheater acted. What we found was quite astonishing: If the actor who cheated was wearing the same color wristband as a participant, then the participant viewed his actions as much less objectionable than did participants wearing a different color wristband. Feeling some level of similarity with the perpetrator leads one to excuse his behavior.

This simple example shows how deeply social bonds can alter moral judgments. The people in Southie who still look at Whitey as a hero would probably condemn another individual from New York who committed the same crimes.

For 16 years, Bulger lived life on the lam with his partner Catherine Greig, whom he must have trusted not to turn him in to the authorities. What role may trust have played in their relationship?

Trust is a fundamental part of the human condition. We have to trust people because we need others to survive. Trusting another person presents an interesting dynamic because it offers the potential for joint gain, or asymmetric loss. If both individuals are trustworthy, both can benefit. If, on the other hand, one “sells out,” then he or she can gain at the other’s expense. How much we’re willing to trust another person depends on several factors, but a primary one is the extent to which outcomes are joined.

In the case of Whitey Bulger and Catherine Greig, both faced prison sentences if the other broke ranks. Each knew enough of the other’s secrets, habits and finances that if one didn’t support the other, he or she would have a lot to lose. Having said that, work in our lab shows that trustworthiness is changeable. We can be very trustworthy with one person in one situation, but completely untrustworthy with another. Just because Whitey Bulger and Catherine Greig appear to have acted in a trustworthy manner with each other does not indicate how they might deal with someone else.

Related Situationist posts:

Posted in Emotions, Morality | 1 Comment »

Memory and Morality

Posted by The Situationist Staff on February 18, 2011

Francesca Gino and Sreedhari Desai recently posted their paper, “Memory Lane and Morality: How Childhood Memories Promote Prosocial Behavior” on SSRN.  Here’s the abstract.

* * *

Four experiments demonstrated that recalling memories from one’s own childhood led people to experience feelings of moral purity and to behave prosocially. In Experiment 1, participants instructed to recall memories from their childhood were more likely to help the experimenter with a supplementary task than were participants in a control condition, and this effect was mediated by self-reported feelings of moral purity. In Experiment 2, the same manipulation increased the amount of money participants donated to a good cause, and self-reported feelings of moral purity mediated this relationship. In Experiment 3, participants who recalled childhood memories judged the ethically-questionable behavior of others more harshly, suggesting that childhood memories lead to altruistic punishment. Finally, in Experiment 4, compared to a control condition, both positively-valenced and negatively-valenced childhood memories led to higher empathic concern for a person in need, which, in turn, increased intentions to help.

* * *

Download the paper for free here.

Related Situationist posts:

Posted in Abstracts, Morality | Leave a Comment »

Why Do Lawyers Acquiesce in their Clients’ Misconduct? — Part IV

Posted by Sung Hui Kim on August 6, 2010

This is Part IV of my series, exploring the reasons why lawyers acquiesce in their clients’ frauds and other misconduct.  For background, please access Part I, Part II and Part III of this series.  In this segment, I will focus on the relationship between lawyers’ “role ideology”—normative visions about their professional role—and the inclination to “go along to get along” when their high status clients (or, more accurately, high-paying client representatives) want to engage in financial shenanigans that impact our capital markets.

Don’t think this is an issue?  It is now 2010 and we are still recovering from the most serious financial crisis since the Great Depression.  No doubt, some lawyers looked the other way when their client representatives wanted to engage in deception.  The difficulty for researchers like me who want to learn more about this type of problem is that information about the lawyer-client relationship is ordinarily privileged (to be sure, there are a number of exceptions, e.g., the crime-fraud exception).   Luckily, we have the following story of the former associate general counsel of Lehman Brothers, based on some excellent reporting by James Sterngold of (Bloomberg) BusinessWeek, which you can directly access here: Lehman Bros. story.

But here’s a brief synopsis of the news story from BusinessWeek:

Oliver Budde faced a momentous life decision.  In February 2006, he had resigned from his position as associate general counsel of Lehman Brothers, a venerable (and publicly traded) investment bank at which he had worked for nine years.  He had been disappointed with the lack of transparency in how his firm had disclosed certain long-term restricted stock units (RSUs) that were granted to senior executives, including former chief executive officer (CEO) Richard S. Fuld Jr.  After raising the issue with his superiors in the general counsel’s office, he was told that Lehman’s outside attorneys at a prestigious law firm had blessed the policy to exclude unvested RSUs from the annual compensation tables in the SEC filings.  Budde disagreed with this aggressive interpretation of the rules and voiced his objections.  Eventually, he quit the firm.

Later that year, the Securities & Exchange Commission (SEC) announced that it would require the clear reporting of unvested RSUs and other stock-based awards in public filings.  Eager to see if the firm would now fully disclose the controverted RSUs, Budde pored over the proxy statement released in March 2008.  “I looked several times, and my jaw just dropped,” he said.  “What happened to the RSUs?”

After performing some calculations, Budde determined that CEO Fuld’s compensation was $409.5 million, rather than the mere $146 million disclosed in the proxy statement.  Apparently, Lehman had counted only two of fifteen RSU grants. After considering his options, Budde decided to blow the whistle and report Lehman’s noncompliance to the SEC and Lehman’s board of directors.  On April 14, 2008, he sent a detailed two-page e-mail to the SEC’s Division of Enforcement.  After describing Fuld’s failure to disclose more than $250 million in RSU grants, Budde wrote:

The last thing the country needs right now is another investment bank in crisis.  I have wrestled with this over the past five weeks, since I first read the proxy.  This is not a shot at retribution, and I am in no way a disgruntled former employee (disappointed, even disgusted, yes).  I walked away freely from Lehman, and my ethical concerns in a number of areas were no secret to my superiors there. (Sterngold)

For his efforts, Budde received only a form “thank you” letter from the SEC.  His letters to the Lehman board were also ignored.  But Budde’s calculations were supported by a Yale Journal on Regulation article entitled “The Wages of Failure: Executive Compensation at Bear Stearns and Lehman, 2000-2008” (Bebchuk et al.).  Of course, as it turned out, potential securities fraud was just one of the myriad problems afflicting Lehman at that time.  In September 2008, Lehman Brothers collapsed in the largest bankruptcy in U.S. history. (Sterngold)

The Oliver Budde story raises a number of questions, the answers to which we still do not know.  One wonders: to what extent did the in-house and outside lawyers of Lehman Brothers (other than Budde) actively engage their client representatives (CEO Fuld among others) on whether it was ethically proper to exclude unvested RSUs from the annual compensation tables in the SEC public filings?  Setting aside whether it was explicitly required by the SEC regulations at the time, didn’t the concealment of material amounts of compensation cause the lawyers to at least pause and consider the ethical implications, especially in light of public furor over runaway executive compensation?

My guess is that if those lawyers paused, they didn’t pause for long.  It is likely that by the time this issue arose at Lehman Brothers, experienced lawyers had (more or less) fallen into the habit of analyzing ethical problems in a way that bleaches out the moral content.  Social psychologists call this gradual transformation “ethical fading.”

More provocatively and more generally, I wonder whether societal views about lawyers’ role can in fact contribute to lawyers’ acting unethically, which, of course, belies the notion that lawyers should be professionally independent from their clients.  I’d like to explore the issue of whether the normative visions of the lawyers’ role are the “dog wagging the tail” or the “tail wagging the dog.”

Role Ideology

Lawyers can be professionally molded to accommodate various conceptions of lawyering, with some conceptions creating greater pressures to align with clients than others.  The effect of all these alignment pressures (including the alignment pressures stemming from economic self-interest) is that lawyers’ ethical judgments will sway in the client’s favor.  (To be clear, I am focusing exclusively on lawyers who represent high-paying corporate clients.  I am fully aware that the opposite problem—lawyers exploiting or taking advantage of clients—occurs with many individual or less affluent clients.)

One key variable in determining the strength of an agent’s accountability to her principal is her understanding of the nature of her role as attorney and the ideological or normative commitments that such role entails – role ideology.  Ideologies about the law come with their own particular normative vision of lawyering and the lawyer’s role. Conversely, roles come “ready-made,” packaged by society, with their own sets of ideologies or “normative guidelines and values that give meaning and shape behavior.”  Even an ideology that purports to view the lawyer’s role in “morally neutral” or “agnostic” terms still makes a normative choice that we should view her role in such terms.

Role ideologies serve two functions.  First, they constitute nontrivial ex ante situational influences that define the universe of socially acceptable norms for that role, to whom or what the lawyer is accountable, and what degree of alignment to (or independence from) the de facto principal (i.e., the client representative) is socially appropriate.  When acting in accordance with a role, one simply acts as others expect one to act.  As put by philosopher Gerald Postema, “Although there is a personal or idiosyncratic element in any person’s conception, nevertheless, because the role of lawyer is largely socially defined, significant public or shared elements are also involved.”  Thus, socially defined role ideologies can lend ideological legitimation to a given style of lawyering (e.g., lawyering based on “client supremacy”), making it a more palatable option.  Over time, a lawyer may come to identify with a particular role ideology and come to believe that her unethical choices are in fact entirely consistent with, and even possibly endorsed by, such role ideology.

Second, and perhaps more importantly, role ideologies can serve to legitimate any post hoc rationalizations of unethical behavior by framing the ethical problem in a manner that makes it more attractive to act unethically.  As Postema explains, “By taking shelter in the role, the individual places the responsibility for all of his acts at the door of the institutional author of the role.”  For the person who fully identifies with her role, the response “because I am a lawyer,” or more generally “because that’s my job,” suffices as a complete answer to the question “why do that?”  And cognitive dissonance theory predicts that when our internal attitudes do not correspond with our actions, then our internal attitudes are likely to shift to harmonize with our past actions.

In modern legal culture, various role ideologies are available.  At one extreme is the “officer of the court” view, the grand vision of a public-regarding role for lawyers that contemplates a broader professional obligation than to act only in the client’s (or the lawyer’s) self-interest.  Under this model, inside counsel, simply by virtue of being a lawyer, would be accountable not only to her client representative but also to the public. In this world, the alignment generated by accountability to the de facto principal might be partial (at best), since lawyers would not only have to consider management’s (perhaps fraudulent) goals but also the public welfare.  Of course, outside of the legal academy, most lawyers do not live in this world.

At the other extreme, the lawyer’s role is shaped by a “law is the enemy” or “libertarian-antinomian” philosophy (to use Robert Gordon’s nomenclature), which sees regulation contemptuously as nothing more than a tax on business, a hindrance to the wheels of private commerce.  This view is reflected in President Reagan’s inaugural address statements: “[G]overnment is not the solution to our problem; government is the problem.”  At Enron, such a view was endorsed by management: senior managers had conducted a skit in which one of the themes was deceiving the SEC.  Under this view, the lawyer’s role is to assist the client in devising creative ways to circumvent the law regardless of any harm to third parties or the underlying purposes of the law.  As the view that is most hostile to law, the alignment to the de facto principal (who favors unlawful actions) would be strong.

In the middle, two agency-centered conceptions characterize how many lawyers view their role. One traditional conception of lawyering that has found tremendous longstanding support by the organized bar and the rules of professional ethics is that the lawyer should be committed to the “aggressive and single-minded pursuit of the client’s objectives” within, but all the way up to, the limits of the law.  Her zealous advocacy should not be constrained by her own moral sentiments or commitments but only by the “objective, identifiable bounds of the law.”  Thus, under this model of partisan loyalty, the lawyer is instructed to interpret legal boundaries from the perspective of maximizing client interest.  In this client-centered world, the alignment to the de facto principal would also be strong.

Another middle-of-the-road ideology is the “agnostic” view that law is a “neutral constraint,” and – accordingly — the lawyer’s role is that of an amoral risk-assessor.  This view is characterized by the lack of moral imperative to comply with the law and the lawyer’s moral detachment from the law.  The lawyer’s role is diminished to that of a counselor who games the rules to work around the constraints and lower “tariffs” or “taxes” as much as possible.  While this view is not openly hostile to the law, it is not respectful of and thus corrodes the legitimating force of the law.  The lack of moral imperative to observe the law means that noncompliance is a feasible, even reasonable, business option.

Which role ideologies predominate in today’s corporate legal practice?  In my view, one can find empirical evidence of all these role ideologies with different lawyers and in different contexts.  That said, I think the two middle-of-the-road ideologies dominate modern corporate representation.  You will find the “zealous advocate” model being emulated in litigation practice.  The image and rhetoric of the “zealous advocate” also get invoked every time the legal profession fends off external regulation (e.g., regulatory attempts by the SEC).  (See Lawyer Exceptionalism in the Gatekeeping Wars for more on the external regulation of the American bar.)  You will also find a variant of the “zealous advocate” model that substitutes “adversarialism” with “entrepreneurialism” among transactional lawyers who believe that they are “greasing the wheels of commerce.”  And, in my opinion, you will find many lawyers who view themselves as amoral risk-assessors.  Any of these normative visions of lawyering can be stretched to accommodate unethical behavior.

My guess is that those lawyers who accommodated Lehman Brothers’ desire to be less transparent in their public filings were (at least for the moment) adopting something close to the amoral risk-assessor model (which, frankly, is easy to do in highly technical fields like securities regulation or tax).  In short, they were “just providing advice,” telling clients what the pros and cons of a proposed course of action are and then leaving it to the clients to make the final call, even if that final decision is unethical and/or requires the lawyer’s full-blown assistance to implement.  Many lawyers feel they can engage in this ethical division of labor (“so long as I give accurate advice and not encourage you to break the law, you can do what you want”).

But are these role ideologies the dog wagging the tail or the tail wagging the dog?  This question invariably invokes a longstanding debate in social psychology about the extent to which “reasoned deliberation” influences behavior.  Some think reason plays a large role in explaining human behavior.  Others, like psychologist Jonathan Haidt at Virginia, think that reasons—or more accurately—culturally supplied explanations are more likely to be the rational tail wagging the emotional dog, in other words, a post-hoc construction intended to justify more automatic—and typically, self-interested—judgments.  But Haidt qualifies this position.  He says that since we are highly attuned to group norms (and subject to strong conformity pressures), we are much less likely to engage in conduct that clearly violates those norms.  Accordingly, explicit moral reasoning plays an ex ante role in societies by defining what is or is not acceptable behavior.

* * *

For a more thorough treatment of this topic, please read my more comprehensive works, The Banality of Fraud: Re-Situating the Inside Counsel as Gatekeeper, Gatekeepers Inside Out, and my most recently published article – Lawyer Exceptionalism in the Gatekeeping Wars—which relies on insights from cognitive science to explain why the rhetoric of lawyer exceptionalism, invoked when the legal profession fends off external regulation, is nonetheless so appealing.

For a sample of related Situationist posts, see Part I, Part II and Part III of this series and “How Situational Self-Schemas Influence Disposition,” “Categorically Biased – Abstract,” “The Situation of John Yoo and the Torture Memos,” “The Affective Situation of Ethics and Mediation,” “On the Ethical Obligations of Lawyers,” “From Heavens to Hells to Heroes – Part II,” “Person X Situation X System Dynamics,” “‘Situation’ Trumps ‘Disposition’ – Part I & Part II,” and “The Need for a Situationist Morality.”

Posted in Deep Capture, Ideology, Morality, Situationist Contributors, Social Psychology | 1 Comment »

Stealing from the Blind

Posted by The Situationist Staff on June 15, 2010

Here is another segment from John Quinones' excellent ABC 20/20 series titled "What Would You Do?" — a series that, in essence, conducts situationist experiments through hidden-camera scenarios. This episode asks, "Would you help if you witnessed a blind person being given incorrect change?" (The segment includes analysis from social psychologist Carrie Keating.)

* * *

To review a sample of related Situationist posts, see "Journalists as Social Psychologists & Social Psychologists as Entertainers," "Stop that Thief! (or not)," "Dan Ariely on Cheating," "The Death of Free Will and the Rise of Cheating," "Ugly See, Ugly Do," "When Thieves See Situation," and "Cheating Doesn't Pay . . . So Why So Much of It?"

Posted in Life, Morality, Video | Tagged: , , , , | 1 Comment »

Jonathan Haidt – 5 Moral Values Behind Political Choices

Posted by The Situationist Staff on November 25, 2008

In his TedTalk, psychologist Jonathan Haidt describes five moral values that he believes form the basis of our political choices, whether we’re left, right or center.

To read a related Situationist post, see "Jonathan Haidt on the Situation of Moral Reasoning."  To review a collection of posts examining the situation of ideology, click here.

Posted in Ideology, Morality, Politics, Video | Tagged: , , , , | 3 Comments »

Marc Hauser on the Situation of Morality

Posted by The Situationist Staff on October 1, 2008

On the heels of yesterday's post about Marc Hauser's research, we thought the following videos would be of interest to our readers (viewers).

* * *

From TheTechMuseum: Understanding Genetics – An interview with Marc Hauser at the Future of Science Conference in Venice, Italy September 2006.

Part 1 (3:40): You've written that the human sense of right and wrong has evolved. If we have a moral instinct, why did it evolve? What are the advantages?

* * *

Part 2 (1:52): So the ramifications here are enormous, for parenting, school, religion. Isn’t that where most people think they get their sense of right and wrong from?

* * *

Part 3 (2:52): If our moral instinct, and guilt along with it, are inherited, do you foresee a way in the future to pinpoint that this gene does this, or this gene does that?

* * *

Part 4 (3:18): Are we still evolving? If so, is our moral instinct evolving as well?

* * *

Part 5 (3:07): Some think we’re not evolving anymore, that natural selection requires isolation. You don’t share that view?

* * *

Part 6 (4:14): Let’s talk about evolution in the United States. If you don’t accept evolution, how can you learn biology? Or genetics?

* * *

Part 7 (2:28): How do you see the issue of evolution and education?

* * *

For some related Situationist posts, see “The Situation of Innate Morality,” “Moral Psychology Primer,” “Pinker on the Situation of Morality,” and “The Science of Morality.”

Posted in Education, Morality, Neuroscience, Uncategorized, Video | Tagged: , , , , | Leave a Comment »

Law, Psychology & Morality – Abstract

Posted by The Situationist Staff on September 13, 2008

Kenworthey Bilz and Janice Nadler have posted their manuscript “Law, Psychology & Morality.” (forthcoming in Moral Cognition and Decision Making (D. Medin, L. Skitka, C. W. Bauman, & D. Bartels, eds., Academic Press, 2009)) on SSRN.  Here’s the abstract.

* * *

In a democratic society, law is an important means to express, manipulate, and enforce moral codes. Demonstrating empirically that law can achieve moral goals is difficult. Nevertheless, public interest groups spend considerable energy and resources to change the law with the goal of changing not only morally-laden behaviors, but also morally-laden cognitions and emotions. Additionally, even when there is little reason to believe that a change in law will lead to changes in behavior or attitudes, groups see the law as a form of moral capital that they wish to own, to make a statement about society. Examples include gay sodomy laws, abortion laws, and Prohibition. In this Chapter, we explore the possible mechanisms by which law can influence attitudes and behavior. To this end, we consider informational and group influence of law on attitudes, as well as the effects of salience, coordination, and social meaning on behavior, and the behavioral backlash that can result from a mismatch between law and community attitudes. Finally, we describe two lines of psychological research – symbolic politics and group identity – that can help explain how people use the law, or the legal system, to effect expressive goals.

Posted in Abstracts, Law, Legal Theory, Morality, Social Psychology | Tagged: , , , , , | 1 Comment »

Will Wilkinson Interviews Jonathan Haidt

Posted by The Situationist Staff on July 20, 2008

Below is a ten-minute BloggingHeads clip from a one-hour interview of social psychologist Jonathan Haidt.


To watch the entire video, click here. For a sample of related Situationist posts, see "The Motivated Situation of Morality," "Jonathan Haidt on the Situation of Moral Reasoning," and "Moral Psychology Primer."

Posted in Ideology, Morality, Video | Tagged: , , , , , , | 2 Comments »

The Motivated Situation of Morality

Posted by The Situationist Staff on July 15, 2008

A recent story on MSNBC summarizes research indicating “why we’re all moral hypocrites.” Here are a few excerpts.

* * *

Most of us, whether we admit it or not, are moral hypocrites. We judge others more severely than we judge ourselves.

Mounting evidence suggests moral decisions result from the jousting between our knee-jerk responses . . . and our slower, but more collected evaluations. Which is more responsible for our self-leniency?

To find out, a recent study presented people with two tasks. One was described as tedious and time-consuming; the other, easy and brief. The subjects were asked to assign each task to either themselves or the next participant. They could do this independently or defer to a computer, which would assign the tasks randomly.

Eighty-five percent of 42 subjects passed up the computer’s objectivity and assigned themselves the short task – leaving the laborious one to someone else. Furthermore, they thought their decision was fair. However, when 43 other subjects watched strangers make the same decision, they thought it unjust.

* * *

The researchers then “constrained cognition” by asking subjects to memorize long strings of numbers. In this greatly distracted state, subjects became impartial. They thought their own transgressions were just as terrible as those of others.

This suggests that we are intuitively moral beings, but “when we are given time to think about it, we construct arguments about why what we did wasn’t that bad,” said lead researcher Piercarlo Valdesolo, who conducted this study at Northeastern University and is now a professor at Amherst College.

* * *

The researchers speculate that instinctive morality results from evolutionary selection for team players. Being fair, they point out, strengthens mutually beneficial relationships and improves our chances for survival.

So why do we choose to judge ourselves so leniently?

* * *

To read the entire article, including the answer to that last question, click here.

For related Situationist posts, see "Jonathan Haidt on the Situation of Moral Reasoning," "Moral Psychology Primer," "Pinker on the Situation of Morality," "Our Brain and Morality," "The Situation of Reason," "I'm Objective, You're Biased," "Mistakes Were Made (but not by me)," and "Why We Punish."

Posted in Conflict, Emotions, Experimental Philosophy, Morality, Social Psychology | Tagged: , , | 1 Comment »

Moral Psychology Primer

Posted by The Situationist Staff on May 27, 2008

Dan Jones has a terrific article in the April issue of Prospect, titled "The Emerging Moral Psychology." We've included some excerpts from the article below.

* * *

Long thought to be a topic of enquiry within the humanities, the nature of human morality is increasingly being scrutinised by the natural sciences. This shift is now beginning to provide impressive intellectual returns on investment. Philosophers, psychologists, neuroscientists, economists, primatologists and anthropologists, all borrowing liberally from each others’ insights, are putting together a novel picture of morality—a trend that University of Virginia psychologist Jonathan Haidt has described as the “new synthesis in moral psychology.” The picture emerging shows the moral sense to be the product of biologically evolved and culturally sensitive brain systems that together make up the human “moral faculty.”

A pillar of the new synthesis is a renewed appreciation of the powerful role played by intuitions in producing our ethical judgements. Our moral intuitions, argue Haidt and other psychologists, derive not from our powers of reasoning, but from an evolved and innate suite of “affective” systems that generate “hot” flashes of feelings when we are confronted with a putative moral violation.

This intuitionist perspective marks a sharp break from traditional “rationalist” approaches in moral psychology, which gained a large following in the second half of the 20th century under the stewardship of the late Harvard psychologist Lawrence Kohlberg. In the Kohlbergian tradition, moral verdicts derive from the application of conscious reasoning, and moral development throughout our lives reflects our improved ability to articulate sound reasons for the verdicts . . . .

But experimental studies give cause to question the primacy of rationality in morality. In one experiment, Jonathan Haidt presented people with a range of peculiar stories, each of which depicted behaviour that was harmless (in that no sentient being was hurt) but which also felt “bad” or “wrong.” One involved a son who promised his mother, while she was on her deathbed, that he would visit her grave every week, and then reneged on his commitment because he was busy. Another scenario told of a man buying a dead chicken at the supermarket and then having sex with it before cooking and eating it. These weird but essentially harmless acts were, nonetheless, by and large deemed to be immoral.

Further evidence that emotions are in the driving seat of morality surfaces when people are probed on why they take their particular moral positions. In a separate study which asked subjects for their ethical views on consensual incest, most people intuitively felt that incestuous sex is wrong, but when asked why, many gave up, saying, “I just know it’s wrong!”—a phenomenon Haidt calls “moral dumbfounding.”

It’s hard to argue that people are rationally working their way to moral judgements when they can’t come up with any compelling reasons—or sometimes any reasons at all—for their moral verdicts. Haidt suggests that the judgements are based on intuitive, emotional responses, and that conscious reasoning comes into its own in creating post hoc justifications for our moral stances. Our powers of reason, in this view, operate more like a lawyer hired to defend a client than a disinterested scientist searching for the truth.

Our rational and rhetorical skill is also recruited from time to time as a lobbyist. Haidt points out that the reasons—whether good or bad—that we offer for our moral views often function to press the emotional buttons of those we wish to bring around to our way of thinking. So even when explicit reasons appear to have the effect of changing people’s moral opinions, the effect may have less to do with the logic of the arguments than their power to elicit the right emotional responses. We may win hearts without necessarily converting minds. . . .

Even if you recognise the tendency to base moral judgements on how moral violations make you feel, you probably would also like to think that you have some capacity to think through moral issues, to weigh up alternative outcomes and make a call on what is right and wrong.

Thankfully, neuroscience gives some cause for optimism. Philosopher-cum-cognitive scientist Joshua Greene of Harvard University and his colleagues have used functional magnetic resonance imaging to map the brain as it churns over moral problems, inspired by a classic pair of dilemmas from the annals of moral philosophy called the Trolley Problem and the Footbridge Problem. [For a review of Greene's research, click here.]

* * *

What is going on in the brain when people mull over these different scenarios? Thinking through cases like the Trolley Problem—what Greene calls an impersonal moral dilemma as it involves no direct violence against another person—increases activity in brain regions located in the prefrontal cortex that are associated with deliberative reasoning and cognitive control (so-called executive functions). This pattern of activity suggests that impersonal moral dilemmas such as the Trolley Problem are treated as straightforward rational problems: how to maximise the number of lives saved. By contrast, brain imaging of the Footbridge Problem—a personal dilemma that invokes up-close and personal violence—tells a rather different story. Along with the brain regions activated in the Trolley Problem, areas known to process negative emotional responses also crank up their activity. In these more difficult dilemmas, people take much longer to make a decision and their brains show patterns of activity indicating increased emotional and cognitive conflict within the brain as the two appalling options are weighed up.

Greene interprets these different activation patterns, and the relative difficulty of making a choice in the Footbridge Problem, as the sign of conflict within the brain. On the one hand is a negative emotional response elicited by the prospect of pushing a man to his death saying "Don't do it!"; on the other, cognitive elements saying "Save as many people as possible and push the man!" For most people thinking about the Footbridge Problem, emotion wins out; in a minority of others, the utilitarian conclusion of maximising the number of lives saved prevails.

* * *

While there is a growing consensus that the moral intuitions revealed by moral dilemmas such as the Trolley and Footbridge problems draw on unconscious psychological processes, there is an emerging debate about how best to characterise these unconscious elements.

On the one hand is the dual-processing view, in which “hot” affectively-laden intuitions that militate against personal violence are sometimes pitted against the ethical conclusions of deliberative, rational systems. An alternative perspective that is gaining increased attention sees our moral intuitions as driven by “cooler,” non-affective general “principles” that are innately built into the human moral faculty and that we unconsciously follow when assessing social behaviour.

In order to find out whether such principles drive moral judgements, scientists need to know how people actually judge a range of moral dilemmas. In recent years, Marc Hauser, a biologist and psychologist at Harvard, has been heading up the Moral Sense Test (MST) project to gather just this sort of data from around the globe and across cultures.

The project is casting its net as wide as possible: the MST can be taken by anyone with access to the internet. Visitors to the “online lab” are presented with a series of short moral scenarios—subtle variations of the original Footbridge and Trolley dilemmas, as well as a variety of other moral dilemmas. The scenarios are designed to explore whether, and how, specific factors influence moral judgements. Data from 5,000 MST participants showed that people appear to follow a moral code prescribed by three principles:

• The action principle: harm caused by action is morally worse than equivalent harm caused by omission.

• The intention principle: harm intended as the means to a goal is morally worse than equivalent harm foreseen as the side-effect of a goal.

• The contact principle: using physical contact to cause harm to a victim is morally worse than causing equivalent harm to a victim without using physical contact.

Crucially, the researchers also asked participants to justify their decisions. Most people appealed to the action and contact principles; only a small minority explicitly referred to the intention principle. Hauser and colleagues interpret this as evidence that some principles that guide our moral judgments are simply not available to, and certainly not the product of, conscious reasoning. These principles, it is proposed, are an innate and universal part of the human moral faculty, guiding us in ways we are unaware of. In a (less elegant) reformulation of Pascal’s famous claim that “The heart has reasons that reason does not know,” we might say “The moral faculty has principles that reason does not know.”

The notion that our judgements of moral situations are driven by principles of which we are not cognisant will no doubt strike many as implausible. Proponents of the “innate principles” perspective, however, can draw succour from the influential Chomskyan idea that humans are equipped with an innate and universal grammar for language as part of their basic design spec. In everyday conversation, we effortlessly decode a stream of noise into meaningful sentences according to rules that most of us are unaware of, and use these same rules to produce meaningful phrases of our own. Any adult with normal linguistic competence can rapidly decide whether an utterance or sentence is grammatically valid or not without conscious recourse to the specific rules that determine grammaticality. Just as we intuitively know what we can and cannot say, so too might we have an intuitive appreciation of what is morally permissible and what is forbidden.

Marc Hauser and legal theorist John Mikhail of Georgetown University have started to develop detailed models of what such an “innate moral grammar” might look like. Such models usually posit a number of key components, or psychological systems. One system uses “conversion rules” to break down observed (or imagined) behaviour into a meaningful set of actions, which is then used to create a “structural description” of the events. This structural description captures not only the causal and temporal sequence of events (what happened and when), but also intentional aspects of action (was the outcome intended as a means or a side effect? What was the intention behind the action?).

With the structural description in place, the causal and intentional aspects of events can be compared with a database of unconscious rules, such as "harm intended as a means to an end is morally worse than equivalent harm foreseen as the side-effect of a goal." If the events involve harm caused as a means to the greater good (and particularly if caused by the action and direct contact of another person), then a judgement of impermissibility is more likely to be generated by the moral faculty. In the most radical models of the moral grammar, judgements of permissibility and impermissibility occur prior to any emotional response. Rather than driving moral judgements, emotions in this view arise as a by-product of unconsciously reached judgements as to what is morally right and wrong.
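The pipeline sketched above (conversion rules produce a structural description, which is then scored against a database of stored principles to yield a permissibility verdict) can be caricatured as a tiny rule-based classifier. The following Python sketch is purely illustrative: the class names, the boolean encoding of scenarios, and the scoring threshold are all hypothetical choices of ours, not features of Hauser and Mikhail's actual models.

```python
from dataclasses import dataclass

@dataclass
class StructuralDescription:
    """Toy structural description of an observed action, encoding the
    three principles reported from the Moral Sense Test data."""
    harm_by_action: bool      # action vs. omission (action principle)
    harm_as_means: bool       # means vs. side effect (intention principle)
    physical_contact: bool    # direct contact vs. at a distance (contact principle)

def severity(desc: StructuralDescription) -> int:
    """Count how many aggravating principles the scenario triggers."""
    return sum([desc.harm_by_action, desc.harm_as_means, desc.physical_contact])

def judge(desc: StructuralDescription, threshold: int = 2) -> str:
    """Return a permissibility verdict; the threshold is an arbitrary stand-in
    for whatever weighting the real moral faculty applies."""
    return "impermissible" if severity(desc) >= threshold else "permissible"

# Trolley Problem: harm is a foreseen side effect, caused at a distance.
trolley = StructuralDescription(harm_by_action=True, harm_as_means=False,
                                physical_contact=False)
# Footbridge Problem: harm is the means to the goal, caused by direct contact.
footbridge = StructuralDescription(harm_by_action=True, harm_as_means=True,
                                   physical_contact=True)

print(judge(trolley))     # → permissible
print(judge(footbridge))  # → impermissible
```

Even this crude sketch reproduces the standard asymmetry between the two dilemmas, which is the point of the "principles and parameters" idea: varying the threshold (a "parameter") changes which scenarios a culture tolerates, while the principles themselves stay fixed.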

Hauser argues that a similar “principles and parameters” model of moral judgement could help make sense of universal themes in human morality as well as differences across cultures (see below). There is little evidence about how innate principles are affected by culture, but Hauser has some expectations as to what might be found. If the intention principle is really an innate part of the moral faculty, then its operation should be seen in all cultures. However, cultures might vary in how much harm as a means to a goal they typically tolerate, which in turn could reflect how extensively that culture sanctions means-based harm such as infanticide (deliberately killing one child so that others may flourish, for example). These intriguing though speculative ideas await a thorough empirical test.

* * *

Although current studies have only begun to scratch the surface, the take-home message is clear: intuitions that function below the radar of consciousness are most often the wellsprings of our moral judgements. . . .

Despite the knocking it has received, reason is clearly not entirely impotent in the moral domain. We can reflect on our moral positions and, with a bit of effort, potentially revise them. An understanding of our moral intuitions, and the unconscious forces that fuel them, gives us perhaps the greatest hope of overcoming them.

* * *

To read the entire article, click here. To read some related Situationist posts, see "Quick Introduction to Experimental (Situationist?) Philosophy" and "Pinker on the Situation of Morality."

Posted in Ideology, Morality, Neuroscience, Philosophy | Tagged: , , , , , , , , , , , , | 5 Comments »

Morality and Religion

Posted by The Situationist Staff on April 21, 2008

For a worthwhile discussion, check out this Bloggingheads exchange between psychologist Paul Bloom and experimental philosopher Joshua Knobe.

Posted in Abstracts, Experimental Philosophy, Morality, Video | Tagged: , , , , | Leave a Comment »
