The Situationist

Archive for February, 2007

Fast Food Litigation

Posted by The Situationist Staff on February 28, 2007

Lianne S. Pinchuk, an associate at Weil Gotshal, has an article today comparing fast food litigation with cigarette litigation and summarizing the unpromising prospects of the former. The article highlights some of the obstacles facing lawyers seeking significant damages from purveyors of fast food — including evidentiary burdens (particularly on the element of causation) and state statutes (the so-called “cheeseburger bills”) immunizing the food industry from certain types of liability. The Situationist would add a further source of protection for the industry: the widely held sense that blame for obesity belongs on those who eat the food, not those who manufacture, market, and sell it.

Pinchuk concludes her article this way:

Despite the lack of success of obesity-related personal injury cases thus far, it is important to remember that when allegations were first made against tobacco companies, the possibility of large verdicts seemed remote. It was only once the litigation reached the discovery phase and negative internal documents were revealed that large plaintiffs’ verdicts became possible. The Big Food cases to date have generally not led to discovery, and only Big Food itself knows what damning documents may exist. If they do exist and are discovered by plaintiffs lawyers, they may provide ammunition for more suits and increasing verdicts. Right now, however, fast food companies are enjoying more protections than tobacco companies ever did, and it appears that Big Food is not the next Big Tobacco.

The point is a good one and reveals a double bind for plaintiffs. If fast-food lawsuits are not viable because of the presumption that consumers are to blame, and if a key way to demonstrate the culpability of the industry is through a discovery process that is permitted only when one has a viable cause of action, then it may be that our attributions of blame reflect the failure of lawsuits as much as the failure of lawsuits reflects our assessment of blame.

Posted in Food and Drug Law | Leave a Comment »

The Young and the Lucky

Posted by The Situationist Staff on February 27, 2007


In the March/April 2007 Harvard Magazine, Harbour Fraser Hodder summarizes research by social psychologists Kristina Olson, (Situationist Contributor) Mahzarin Banaji, Elizabeth Spelke, and Carol Dweck on how children understand and respond to social inequality. Among other findings, the researchers have shown that children seem to favor the lucky over the unlucky – the “lucky effect.” Hodder’s brief article, from which the remainder of this post draws, provides a terrific summary:

Pretend that you’re five years old. A grownup in a white coat tells you about Jane, who found $5 on the sidewalk; Johnny, who was splashed by a passing car; Jim, who helped his mom bake a cake; and Sue, who took a toy from her little brother. After each story the grownup asks, “How much do you like Jane [or Johnny, or Jim, or Sue]?” To answer, you must point to a large frowning face (really don’t like) or a large smiling face (really like) or one of four in-between faces. Which face would you pick?

If you’re like the 32 five- to seven-year-olds in a recent study of bias in children, you’ll surprise everyone with how much more you prefer the lucky to the unlucky kids. As expected, you’ll also know the difference between doing something good or bad on purpose and having something good or bad happen by accident. And you’ll like the kids who intentionally did a good thing over those who did something bad even more than you like the lucky ones. But your preference for more fortunate peers will nevertheless be part of a “whoppingly big” effect, according to doctoral candidate in psychology Kristina Olson, lead author for the research published in Psychological Science. “I don’t think any of us expected the ‘lucky effect’ to be that large.”

Would such bias spread to new members of a group experiencing fortune or misfortune? “How might seeing news of Hurricane Katrina influence kids’ perceptions of black people who are living in Ohio?” asks Olson. To begin testing this issue, she and her coauthors and advisers—Cabot professor of social ethics and Pforzheimer professor at Radcliffe Mahzarin Banaji, Berkman professor of psychology Elizabeth Spelke, and Carol Dweck, Ransford professor of psychology at Stanford—conducted a second study. Here 43 five- to seven-year-olds “meet” cartoon children with nearly identical facial expressions but different T-shirt colors. Three of the blue-T kids are lucky (e.g., putting money into a candy machine and getting two candy bars), and three of the green-T kids are unlucky (e.g., riding in the car when it breaks down). The other two in each group simply like to eat oatmeal or ride a bike. Then two new kids appear on screen wearing blue and green shirts but without further description. “Who do you like more?”

The results were “even more shocking” than those of the first study, says Olson. The children “were overwhelmingly more likely to prefer the member of the group that had experienced lucky events, despite the fact that this person did not experience a lucky or unlucky event as far as they knew, and despite the fact that group membership wasn’t perfectly predictive of whether they’d experience a lucky or unlucky event,” she explains.

For ethical reasons, Olson and her colleagues chose trivial events to begin their research; they don’t know if children would extend these preferences to more extreme events such as Hurricane Katrina. In such cases, Olson can imagine that “either kids’ evaluations become more extreme, or ideas like empathy come into play and they think, ‘Wait a minute, that’s terrible. How can I help?’”

The lucky effect, notes Olson, could prove to be “one of the early seeds” of system justification, the controversial theory (formulated by social psychologist [and Situationist Contributor] John T. Jost, RF ’03, Banaji, and others in the 1990s) that adults are motivated to defend the status quo as fair and legitimate even at the expense of personal or group self-interest. That theory has been cited as a possible explanation for the persistence of social inequality from one generation to the next. As young children observe some groups experiencing greater fortune or misfortune than others, they may decide, “‘They wouldn’t have such good stuff if they didn’t deserve it’ or ‘They wouldn’t have such bad stuff if they didn’t deserve it,’” Olson explains. “In the process of trying to make sense of the world, they may actually come to justify the world as it is. We have some work in progress [that looks] at whether kids not only like the lucky person more, but actually think the lucky person is a better person.”

Might the lucky effect be a vestigial survival instinct? “If we continue to find these preferences across very different cultures and at even younger and younger ages, then there may be more to tell about whether this could have been evolutionarily beneficial, though the data to date cannot yet address this possibility,” acknowledges Olson. Social psychology has shown that adults tend to prefer individuals and groups who are more fortunate. . . . “It turns out that five-year-olds are just as irrational as adults.”

Posted in Social Psychology, System Legitimacy | 6 Comments »

The Situation of Music

Posted by The Situationist Staff on February 26, 2007

Denis Dutton, who teaches aesthetics at the University of Canterbury, has an op-ed in today’s New York Times. In it, he describes the late-blossoming English pianist Joyce Hatto, who in her 60s and 70s “recorded more than 120 CDs — including many of the most difficult piano pieces ever written, played with breathtaking speed and accuracy.” Dutton explains that she was viewed as “a prodigy of old age,” who had “an astonishing, chameleon-like artistic ability.” Her husband, who ran the label that recorded her music, recently said of her: “She was a pianist who developed all through her life. It was amazing. She had this wonderful independence of the hands.”

Sound too good to be true? That’s because . . . .

Here, the story becomes a little less uplifting.

Last year, this “greatest living pianist that almost no one has heard of” lost her battle with cancer. And then, very recently, the discovery was made that, in fact, she was not so much a “late bloomer” as a plagiarist.

Her husband, William Barrington-Coupe, had been digitally altering other musicians’ recordings and selling them as hers. This scandal has hit the classical music world hard. Andrys Baste, who has been keeping up with the unfolding discoveries on his website, had this to say a few days ago: “this . . . fraud is now considered the most ‘jawdropping’ scandal ever to hit the ‘polite’ world of classical music. A firestorm of talk rages in classical music forums around the globe. And it’s usually not polite :-).”

Then, today, Gramophone has a story reporting the confession, explanation, and apology of Joyce Hatto’s husband, William Barrington-Coupe, who apparently wrote a letter to the classical music magazine explaining that he “deeply regrets what has happened. He feels that he has acted stupidly, dishonestly and unlawfully. However, he maintains that his wife knew nothing of the deception. He also claims that he has not made vast amounts of money from what he has done . . . .” In a later article today in The Telegraph, Barrington-Coupe explained: “It was very complicated and difficult to do, but I just allowed myself to take longer passages than initially because it was easier to fit them in.” He added: “I did it just to keep my wife going and to give her something to live for. She died feeling that it hadn’t all been in vain. She was a very talented musician and piano was her life.”

One question raised in Denis Dutton’s op-ed was how this could have happened. Not, how could Barrington-Coupe (with or without Hatto’s knowledge) have attempted such a hoax? But how could the classical music audience have so easily been taken in?

Dutton suggests that Hatto’s fame grew immensely in the last few years because the story was so good, and the audience heard what it wanted to hear — magnificent music by a brilliant, if fading, star who had overcome her age, her struggles with her health, and her anonymity to become not just recognized, but celebrated and admired.

Why didn’t the experts recognize the pieces as someone else’s? According to Dutton, the CDs “usually stole from younger artists who were not household names,” and whose stories were not themselves music to the ears. Perhaps more important:

“the Joyce Hatto episode is a stern reminder of the importance of framing and background in criticism. Music isn’t just about sound; it is about achievement in a larger human sense. If you think an interpretation is by a 74-year-old pianist at the end of her life, it won’t sound quite the same to you as if you think it’s by a 24-year-old piano-competition winner who is just starting out. Beyond all the pretty notes, we want creative engagement and communication from music, we want music to be a bridge to another personality. Otherwise, we might as well feed Chopin scores into a computer.”

Dutton then offers what may be the only silver lining of this story:

“The greatest lesson for us all ought to be, however, that there are more fine young pianists out there than most of us realize. If it wasn’t Joyce Hatto, then who did perform those dazzlingly powerful Prokofiev sonatas? Having been so moved by hearing “her” Schubert on the radio, I’ve vowed to honor the real pianist by ordering the proper CD, as soon as I find out who it is. Backhanded credit to Joyce Hatto for having introduced us to some fine new talent.”

Unfortunately, that silver lining may not be so bright, because honoring the musician behind the music may be easier said than done. Our perceptions of the source of music and of its quality may be more closely linked than we recognize or would like to believe.

As is now well known, the “new Coke” didn’t sell well even though, in blind taste tests, consumers preferred its taste. That’s just it: taste isn’t blind, and brand associations matter. The same lesson may not apply to music, but then again it may. No doubt, Joyce Hatto’s star has fallen. But it is not clear that the artists from whom her work “borrowed” will now rise to take her place.
Some evidence of that can be found in the experience of American symphony orchestras, which altered their audition policies in the 1970s and ’80s and moved to “blind auditions,” in which candidates were concealed behind a screen when performing for the jury. Since then, the share of female musicians in the top five symphony orchestras has increased from less than 5% to 25% in 2000. According to one study, a substantial chunk of that change can be attributed to the blind auditions.

While that solution is certainly good news, the problem beneath it is consistent with other evidence that our preferences — including our taste in music — tend not to be as neutral and blind as we suppose. The same concerto associated with any other artist may not sound as sweet.

Posted in Emotions, Entertainment | 1 Comment »

Growing Up in a Sexualizing Situation

Posted by The Situationist Staff on February 26, 2007

The American Psychological Association (APA) published a fascinating and important report this month examining the proliferation of sexualizing and objectifying images of girls and young women in the media. The report summarizes evidence of how those images may have a variety of negative consequences for girls as well as for others in our culture.

Dr. Eileen Zurbriggen, Chair of the APA Task Force, explains that there is now “ample evidence to conclude that sexualization has negative effects in a variety of domains, including cognitive functioning, physical and mental health, and healthy sexual development.”

The report also included links to several helpful websites including the following:


Posted in Entertainment, Marketing, Uncategorized | 16 Comments »

Situational Sources of Evil – Part II

Posted by Philip Zimbardo on February 23, 2007

This is the second of a multi-part series of blog postings on the situational sources of evil. Parts of the series, including this post, are taken from an article in the most recent Yale Alumni Magazine, which was adapted from my forthcoming book, The Lucifer Effect: Understanding How Good People Turn Evil (Random House, March 2007). My first post summarized Stanley Milgram’s famous obedience experiments and some of the more recent, related studies they inspired. Milgram’s experiments were among the first to demonstrate in dramatic and disturbing fashion the extent to which situational forces can unexpectedly overwhelm the sort of dispositional forces that most of us assume are moving us.

* * *

This post picks up there and asks the question that must be posed of all such research: what is its external validity? What are the real-world parallels to the laboratory demonstration of authority power?

In 1963, the social philosopher Hannah Arendt published what was to become a classic of our times, Eichmann in Jerusalem: A Report on the Banality of Evil. She provides a detailed analysis of the war crimes trial of Adolf Eichmann, the Nazi figure who personally arranged for the murder of millions of Jews. Eichmann’s defense of his actions was similar to the testimony of other Nazi leaders: “I was only following orders.” What is most striking in Arendt’s account of Eichmann is all the ways in which he seemed absolutely ordinary: half a dozen psychiatrists had certified him as “normal.” Arendt’s famous conclusion: “The trouble with Eichmann was precisely that so many were like him, and that the many were neither perverted nor sadistic, that they were, and still are, terribly and terrifyingly normal.”

Arendt’s phrase “the banality of evil” continues to resonate because genocide has been unleashed around the world and torture and terrorism continue to be common features of our global landscape. A few years ago, the sociologist and Brazil expert Martha Huggins, the Greek psychologist and torture expert Mika Haritos-Fatouros, and I interviewed several dozen torturers. These men did their daily dirty deeds for years in Brazil as policemen, sanctioned by the government to get confessions by torturing “subversive” enemies of the state.

The systematic torture by men of their fellow men and women represents one of the darkest sides of human nature. Surely, my colleagues and I reasoned, here was a place where dispositional evil would be manifest. The torturers shared a common enemy: men, women, and children who, though citizens of their state, even neighbors, were declared by “the System” to be threats to the country’s national security — as socialists and Communists. Some had to be eliminated efficiently, while others, who might hold secret information, had to be made to yield it up by torture, confess and then be killed.

Torture always involves a personal relationship; it is essential for the torturer to understand what kind of torture to employ, what intensity of torture to use on a certain person at a certain time. Wrong kind or too little — no confession. Too much — the victim dies before confessing. In either case, the torturer fails to deliver the goods and incurs the wrath of the senior officers. Learning to determine the right kind and degree of torture that yields up the desired information elicits abounding rewards and flowing praise from one’s superiors. It took time and emerging insights into human weaknesses for these torturers to become adept at their craft.

What kind of men could do such deeds? Did they need to rely on sadistic impulses and a history of sociopathic life experiences to rip and tear the flesh of fellow beings day in and day out for years on end?

We found that sadists are selected out of the training process by trainers because they are not controllable. They get off on the pleasure of inflicting pain, and thus do not sustain the focus on the goal of extracting confessions. From all the evidence we could muster, torturers were not unusual or deviant in any way prior to practicing their new roles, nor were there any persisting deviant tendencies or pathologies among any of them in the years following their work as torturers and executioners. Their transformation was entirely explainable as being the consequence of a number of situational and systemic factors, such as the training they were given to play this new role; their group camaraderie; acceptance of the national security ideology; and their learned belief in socialists and Communists as enemies of their state.

Amazingly, the transformation of these men into violence workers is comparable to the transformation of young Palestinians into suicide bombers intent on killing innocent Israeli civilians. In a recent study, the forensic psychiatrist Marc Sageman [at the Solomon Asch Center] found evidence of the normalcy of 400 al-Qaeda members. Three-quarters came from the upper or middle class. Ninety percent came from caring, intact families. Two-thirds had gone to college; two-thirds were married; and most had children and jobs in science and engineering. In many ways, Sageman concludes, “these are the best and brightest of their society.”

Israeli psychologist Ariel Merari, who has studied this phenomenon extensively for many years, outlines the common steps on the path to these explosive deaths. First, senior members of an extremist group identify young people who, based on their declarations at a public rally against Israel or their support of some Islamic cause or Palestinian action, appear to have an intense patriotic fervor. Next, they are invited to discuss how seriously they love their country and hate Israel. They are asked to commit to being trained. Those who do then become part of a small secret cell of three to five youths. From their elders, they learn bomb making, disguise, and selecting and timing targets. Finally, they make public their private commitment by making a videotape, declaring themselves to be “the living martyr” for Islam. The recruits are also told the Big Lie: their relatives will be entitled to a high place in Heaven, and they themselves will earn a place beside Allah. Of course, the rhetoric of dehumanization serves to deny the humanity and innocence of their victims.

The die is cast; their minds have been carefully prepared to do what is ordinarily unthinkable. In these systematic ways a host of normal, angry young men and women become transformed into true believers. The suicide, the murder, of any young person is a gash in the fabric of the human family that we elders from every nation must unite to prevent. To encourage the sacrifice of youth for the sake of advancing the ideologies of the old must be considered a form of evil that transcends local politics and expedient strategies.

Our final extension of the social psychology of evil from artificial laboratory experiments to real-world contexts comes to us from the jungles of Guyana. There, on November 28, 1978, an American religious leader persuaded more than 900 of his followers to commit mass suicide. In the ultimate test of blind obedience to authority, many of them killed their children on his command.

Jim Jones, the pastor of Peoples Temple congregations in San Francisco and Los Angeles, had set out to create a socialist utopia in Guyana. But over time Jones was transformed from the caring, spiritual “father” of a large Protestant congregation into an Angel of Death. He instituted extended forced labor, armed guards, semistarvation diets, and daily punishments amounting to torture for the slightest breach of any of his many rules. Concerned relatives convinced a congressman and media crew to inspect the compound. But Jones arranged for them to be murdered as they left. He then gathered almost all the members at the compound and gave a long speech in which he exhorted them to take their lives by drinking cyanide-laced Kool-Aid.

Jones was surely an egomaniac; he had all of his speeches and proclamations, even his torture sessions, tape-recorded — including his final suicide harangue. In it Jones distorts, lies, pleads, makes false analogies, appeals to ideology and to transcendent future life, and outright insists that his orders be followed, all while his staff is efficiently distributing deadly poison to the hundreds gathered around him. Some excerpts from that last hour convey a sense of the death-dealing tactics he used to induce total obedience to an authority gone mad:

Please get us some medication. It’s simple. It’s simple. There’s no convulsions with it. [Of course there are, especially for the children.] . . . Don’t be afraid to die. You’ll see, there’ll be a few people land[ing] out here. They’ll torture some of our children here. They’ll torture our people. They’ll torture our seniors. We cannot have this. . . . Please, can we hasten? Can we hasten with that medication? . . . We’ve lived — we’ve lived as no other people lived and loved. We’ve had as much of this world as you’re gonna get. Let’s just be done with it. (Applause.) . . . Who wants to go with their child has a right to go with their child. I think it’s humane. . . . Lay down your life with dignity. Don’t lay down with tears and agony. There’s nothing to death. . . . It’s just stepping over to another plane. Don’t be this way. Stop this hysterics. . . . Look, children, it’s just something to put you to rest. Oh, God. (Children crying.) . . . Mother, Mother, Mother, Mother, Mother, please. Mother, please, please, please. Don’t — don’t do this. Don’t do this. Lay down your life with your child.

And they did, and they died for “Dad.”

A fitting conclusion comes from psychologist Mahzarin Banaji: “What social psychology has given to an understanding of human nature is the discovery that forces larger than ourselves determine our mental life and our actions — chief among these forces [is] the power of the social situation.”

The most dramatic instances of directed behavior change and “mind control” are not the consequence of exotic forms of influence such as hypnosis, psychotropic drugs, or “brainwashing.” They are, rather, the systematic manipulation of the most mundane aspects of human nature over time in confining settings. Motives and needs that ordinarily serve us well can lead us astray when they are aroused, amplified, or manipulated by situational forces that we fail to recognize as potent. This is why evil is so pervasive. Its temptation is just a small turn away, a slight detour on the path of life, a blur in our sideview mirror, leading to disaster.

* * *

My next post will summarize ten significant lessons to be learned from the Milgram studies.

*****See also Part I and Part III of this Series.*****

Posted in Social Psychology, System Legitimacy | 4 Comments »

Internet Disinhibition: Is That Just the E-mail Talking?

Posted by Jon Hanson & Michael McCann on February 22, 2007

Daniel Goleman, author of the best-selling book “Social Intelligence: The New Science of Human Relationships,” has a thought-provoking essay in the New York Times on the “online disinhibition effect,” the tendency of the human brain to feel less restrained in online communication than in face-to-face or telephone communications (“Flame First, Think Later: New Clues to E-mail Misbehavior,” 2/20/2007).

While few of us knew its name, many of us are familiar with the effect: we tend to loosen up while e-mailing and occasionally type messages that we would be uncomfortable saying in person. Think of the “send” buttons you wished you’d never clicked or those responses that seemed, well, unresponsive. To be sure, miscommunication is inevitable in all contexts, but when the only medium conveying a message is digitized words, many of the subtle, situational features of conversation are absent.

Psychologists have identified several other reasons for the online disinhibition effect, including the exaggerated sense of self we experience while communicating alone and the time lag between sending an e-mail message and getting feedback. Persons who comment on messageboards tend to feel even less inhibited, cloaked as they are in Web pseudonyms. Adam Joinson, in a paper titled Causes and Implications of Disinhibited Behavior on the Internet, describes numerous situational cues that foster disinhibition.

New findings in social neuroscience shed light on the distinctions between in-person and internet communications. While the brain (particularly the circuitry centered on the orbitofrontal cortex) reads “a continual cascade of emotional signs and social cues” in face-to-face interaction that helps guide verbal responses, such information is lacking in on-line communication. We can hear a change in voice tone while speaking on the phone, or see a smile turn to a frown or an eyebrow rise in mid-sentence while speaking in person (see related research on Nalini Ambady‘s “thin-slices of experience“). An e-mail provides few such cues, and even “emoticons” like :) and “internet slang” (e.g., “lol”) give the orbitofrontal cortex comparatively little to work with. Without gestures, intonation, cadence, facial expressions, body language, and so on, only a tiny fragment of our “meaning” can be conveyed, and the possibility of misconstrual is heightened.

It should be noted that the norms and disinhibition of internet exchanges can be liberating. For individuals who are shy or socially awkward or those who need to choose their words carefully, for instance, the opportunity to communicate along fewer dimensions provides a freedom that face-to-face exchanges do not. Think of e-mail as alcohol without the buzz. In e-mail veritas.

It seems that evolution did not prepare the human brain for e-mail (or messageboards, or maybe even blogs). That, combined with the fact that an increasing percentage of our communications occur on-line, suggests some additional challenges for the law. If a “contract” is supposed to reflect the intent of the parties involved and “a meeting of their minds,” the (dispositionist) task of identifying and interpreting contracts may be more obviously challenging when the underlying communication is filtered through the net.

Regardless, the sort of hostilities that often lead to lawsuits seem likely to be enhanced by e-mail. Consider, for example, a rather testy and well-publicized e-mail exchange between William A. Korman, a seasoned lawyer in Boston, and Dianna L. Abdala, a brand-new attorney who was backing out of a commitment to work at Korman’s law firm. The thrust of the exchange was captured by Sacha Pfeiffer of the Boston Globe:

Korman was miffed that Abdala notified him by e-mail this month that, after tentatively agreeing to work at his law firm, she changed her mind. Her reason: “The pay you are offering would neither fulfill me nor support the lifestyle I am living.”

In his e-mail reply, Korman told Abdala that her decision not to have told him in person “smacks of immaturity and is quite unprofessional,” and noted that in anticipation of her arrival, he had ordered stationery and business cards for her, reformatted a computer, and set up an e-mail account. Nevertheless, he wrote, “I sincerely wish you the best of luck in your future endeavors.”

Her curt retort: ”A real lawyer would have put the contract into writing and not exercised any such reliance until he did so.”

His: “Thank you for the refresher course on contracts. This is not a bar exam question. You need to realize that this is a very small legal community, especially the criminal defense bar. Do you really want to start pissing off more experienced lawyers at this early stage of your career?”

Lawyers — gotta love us! But the source of the hostility reflected more than simply the lawyerly disposition to tangle and spit; the situation likely mattered too. It seems a little hard to imagine that the “Korman vs. Abdala” interchange would have taken the same tone had it occurred in person rather than electronically.

OWL, the On-Line Writing Lab at Purdue, provides these helpful suggestions for those of us who are prone to “flaming.”

Things to consider before venting in email:

  • Would I say this to this person’s face?
  • Am I putting the receiver in an awkward position?
  • How would I feel if I got this email message?

Usually, by the time you consider the above questions, you will be calm enough to write your message with a different approach. Catching someone by surprise with a flaming message is a quick way to alienate your reader, mainly because he or she will react with anger or embarrassment.

When it appears that a dialogue has turned into a conflict, it is best to suggest an end to the swapping of email and to talk or meet in person. If you receive a flaming email, try to respond with a short and simple message. If that does not appease the flamer, then make contact with him or her outside the virtual realm.

E-mail with care.

Posted in Life | 4 Comments »

Promoting Dispositionism through Entertainment – Part I

Posted by Jon Hanson & Michael McCann on February 20, 2007

This is the first of a series of posts exploring some of the ways the entertainment industry reinforces dominant (dispositionist) conceptions of the human animal.

* * *

Wikipedia defines the fundamental attribution error as “the tendency for people to over-emphasize dispositional, or personality-based, explanations for behaviors observed in others while under-emphasizing situational explanations.” Since identifying it in the late 1960s, social scientists have added significantly to their understanding of the gap between why we behave as we do and why we think we behave as we do. We think we make choices based on the confluence of internal forces including our thinking, our preferring, and our willing – add a dash of personality, a pinch of character, a splash of the supernatural, and you have the key components for human behavior.

Science indicates, quite to the contrary, that those components are widely shared fictions. After thousands of studies and experiments, what becomes clear is that the fundamental attribution error understates the vastness of that gap between perception and reality. We humans (particularly of the American variety) are more or less clueless regarding what is moving us. That is, the “situational” forces are far more numerous and subtle than we ever imagined. Similarly, the dispositionist reasons we offer or conjure up generally reflect our attempts to spin or make sense of our actions. We give reasons in an effort to make “reasonable” what often isn’t. We are, to use the jargon, situational characters caught in a dispositionist mindset and culture.

Where does this gorge between who we are and who we imagine ourselves to be come from? That dispositionist person schema has countless sources – including its own self-fulfilling effects on perception and construal. But one of the key tributaries from which the dispositionist river of individualism, personality, character, and choice is fed is the entertainment industry.

The December and January holiday box office is illustrative, with two of the more popular films, Rocky Balboa ($60 million gross) and The Pursuit of Happyness ($124 million gross–10th highest among films released in 2006), furnishing classic dispositional themes. Indeed, as the Rocky Balboa website promo announces: “The Greatest Underdog Story of Our Time . . . Is Back for One Final Round.” Now in his 50s, Rocky overcomes age, and the doubts and advice of everyone he knows, respects, and loves to take on (and, in effect, beat) the far younger, faster, stronger heavyweight champion of the world. And how, you might ask, does he manage this impossible feat? The answer is simple: an unwillingness to be moved by situation or, put differently, a tenacious will and unflinching disposition. In a speech to his son, Rocky himself explains his success (and, by implication, the success and failure of others) with these spirit-rousing words:

The world ain’t all sunshine and rainbows. It’s a very rough, mean place . . . and no matter how tough you think you are, it’ll always bring you to your knees and keep you there, permanently . . . if you let it. You or nobody ain’t never gonna hit as hard as life. But it ain’t about how hard you hit . . . it’s about how hard you can get hit, and keep moving forward . . . how much you can take, and keep moving forward. If you know what you’re worth, go out and get what you’re worth. But you gotta be willing to take the hit.

We have an unlimited appetite for that inspirational, dispositionist message, and the entertainment industry doesn’t tire of serving it up. The Fox News reviewer calls that speech “The most poignant and ultimately important scene in this humble film,” and adds: “That speech alone is worth the admission to ‘Rocky Balboa’ and makes the conclusion to the 30-year journey that Stallone let us share in worth the wait.” That’s saying something, particularly when you remember some of the earlier movies in the series.

Not feeling sufficiently inspired? Take a stroll to screen number 7. There you will find Chris Gardner, in The Pursuit of Happyness, dishing out a heaping helping of the same message. Looking over the cityscape with his four(ish)-year-old son, he admonishes:

You got a dream, you gotta protect it. People can’t do something themselves, they wanna tell you that you can’t do it. You want something? Go get it. Period.

Nothing new here. Rocky powers through the steaming streets of Philadelphia while Gardner sprints the hilly boulevards of San Francisco. Balboa is figuratively hit by a truck (heavyweight champion Mason Dixon) in the ring, and gets up to keep fighting round after round. What drive! What a will! What a strong jaw! Gardner is literally knocked out of his shoes by a car. Still, for the sake of his dream – becoming a stockbroker – he jumps up and runs shoeless (but otherwise apparently fine) back to the “highly competitive” internship program at Dean Witter, as if nothing happened. What commitment! What character! What strong socks!

Both characters earn their success with more than simply pit-bull determination. They use their heads and exploit special training techniques to outsmart their less driven, less hungry competitors. Rocky’s coach, “Duke,” sums it up this way:

Duke: To beat this guy, you need speed. You don’t have any. Your knees are weak so no hard running. You’ve got neck arthritis and calcium deposits in most of your joints, so sparring is out.

. . .

Duke: So what we’ll be callin’ on, is good old-fashioned blunt force trauma. Horse power. Heavy duty cast iron pile drivin’ punches that will have to hurt so much it’ll rattle his ancestors. Everytime you hit him with a shot, it’s got to feel like he tried kissing the express train.

Duke: [cracks his neck] Yeah! Let’s start building some hurtin’ bombs.

Cue the trumpets – “doo dotadoo dotadoo dotadoo dotadoo . . .” – and roll the clips of bulging arms and barrel chest heaving beer kegs and flinging Russian kettle bells. “Feeling strong now . . .”

Gardner is no less resourceful, though perhaps a little less inspiring. To maximize his success as a glorified, cold-calling phone solicitor, he discovers he can save eight minutes per day by not hanging up the phone (vintage, 1970s) on the receiver – while his less motivated cohorts piss away seconds per call by lazily placing the handset onto the phone stirrups and then lifting the handset again. How does he pull this ingenious trick off? Gardner calls on his index finger to press the disconnect button while the handset stays firmly ensconced between shoulder and ear. Talk about using your head.

From wash-up to Heavyweight Champion of the World to “has been” and back. From broke and unpaid phone solicitor to multimillionaire stockbroker. Rock bottom to American dream. Rags to riches! Please tell me another.

Turns out, a “feel good” movie is one that assures us that we are who we want to believe we are — in control of our destiny no matter our situation. You want something? Go out and get what you’re worth. Go get it. Period.

* * *

The next post in this series will explore what, if anything, the message of those movies has to do with how we conceive of law and policy.

*****See also Part II of this Series.*****

Posted in Entertainment, Life | 6 Comments »

Situational Sources of Evil – Part I

Posted by Philip Zimbardo on February 16, 2007


In the January/February 2007 edition of the Yale Alumni Magazine, I published an article revisiting Stanley Milgram’s famous obedience experiments. That article itself is adapted from my forthcoming book, The Lucifer Effect: Understanding How Good People Turn Evil (Random House, March 2007). For The Situationist, I am breaking that article into several bite-sized posts as part of a larger series of posts on the Situational Sources of Evil.

* * *

Imagine that you have responded to an advertisement in the New Haven newspaper seeking subjects for a study of memory. A researcher whose serious demeanor and laboratory coat convey scientific importance greets you and another applicant upon your arrival at a Yale laboratory in Linsly-Chittenden Hall. You are here to help science find ways to improve people’s learning and memory through the use of punishment. The researcher tells you why this work may have important consequences. The task is straightforward: one of you will be the “teacher” who gives the “learner” a set of word pairings to memorize. During the test, the teacher will give each key word, and the learner must respond with the correct association. When the learner is right, the teacher gives a verbal reward, such as “Good” or “That’s right.” When the learner is wrong, the teacher is to press a lever on an impressive-looking apparatus that delivers an immediate shock to punish the error.

The shock generator has 30 switches, starting from a low level of 15 volts and increasing in 15-volt increments to each higher level. The experimenter tells you that every time the learner makes a mistake, you have to press the next switch. The control panel shows both the voltage of each switch and a description. The tenth level (150 volts) is “Strong Shock”; the 17th level (255 volts) is “Intense Shock”; the 25th level (375 volts) is “Danger, Severe Shock.” At the 29th and 30th levels (435 and 450 volts) the control panel is marked simply with an ominous XXX: the pornography of ultimate pain and power.

You and another volunteer draw straws to see who will play each role; you are to be the teacher, and the other volunteer will be the learner. He is a mild-mannered, middle-aged man whom you help escort to the next chamber. “Okay, now we are going to set up the learner so he can get some punishment,” the experimenter tells you both. The learner’s arms are strapped down and an electrode is attached to his right wrist. The generator in the next room will deliver the shocks. The two of you communicate over an intercom, with the experimenter standing next to you. You get a sample shock of 45 volts — the third level, a slight tingly pain — so you have a sense of what the shock levels mean. The researcher then signals you to start.

Initially, your pupil does well, but soon he begins making errors, and you start pressing the shock switches. He complains that the shocks are starting to hurt. You look at the experimenter, who nods to continue. As the shock levels increase in intensity, so do the learner’s screams, saying he does not think he wants to continue. You hesitate and question whether you should go on. But the experimenter insists that you have no choice.

In 1949, seated next to me in senior class at James Monroe High School in the Bronx, New York, was my classmate, Stanley Milgram. We were both skinny kids, full of ambition and a desire to make something of ourselves, so that we might escape life in the confines of our ghetto experience. Stanley was the little smart one we went to for authoritative answers. I was the tall popular one, the smiling guy other kids would go to for social advice.

I had just returned to Monroe High from a horrible year at North Hollywood High School, where I had been shunned and friendless (because, as I later learned, there was a rumor circulating that I was from a New York Sicilian Mafia family). Back at Monroe, I would be chosen “Jimmy Monroe” — most popular boy in Monroe High School’s senior class. Stanley and I once discussed how that transformation could happen. We agreed that I had not changed; the situation was what mattered.

Situational psychology is the study of the human response to features of our social environment, the external behavioral context, above all to the other people around us. Stanley Milgram and I, budding situationists in 1949, both went on to become academic social psychologists. We met again at Yale in 1960 as beginning assistant professors — him starting out at Yale, me at NYU. Some of Milgram’s new research was conducted in a modified laboratory that I had fabricated a few years earlier as a graduate student — in the basement of Linsly-Chittenden, the building where we taught Introductory Psychology courses. That is where Milgram was to conduct his classic and controversial experiments on blind obedience to authority.

Milgram’s interest in the problem of obedience came from deep personal concerns about how readily the Nazis had obediently killed Jews during the Holocaust. His laboratory paradigm, he wrote years later, “gave scientific expression to a more general concern about authority, a concern forced upon members of my generation, in particular upon Jews such as myself, by the atrocities of World War II.”

As Milgram described it, he hit upon the concept for his experiment while musing about a study in which one of his professors, Solomon Asch, had tested how far subjects would conform to the judgment of a group. Asch had put each subject in a group of coached confederates and asked every member, one by one, to compare a set of lines in order of length. When the confederates all started giving the same obviously false answers, 70 percent of the subjects agreed with them at least some of the time.

Milgram wondered whether there was a way to craft a conformity experiment that would be “more humanly significant” than judgments about line length. He wrote later: “I wondered whether groups could pressure a person into performing an act whose human import was more readily apparent; perhaps behaving aggressively toward another person, say by administering increasingly severe shocks to him. But to study the group effect . . . you’d have to know how the subject performed without any group pressure. At that instant, my thought shifted, zeroing in on this experimental control. Just how far would a person go under the experimenter’s orders?”

How far up the scale do you predict that you would go under those orders? Put yourself back in the basement with the fake shock apparatus and the other “volunteer” — actually the experimenter’s confederate, who always plays the learner because the “drawing” is rigged — strapped down in the next room. As the shocks proceed, the learner begins complaining about his heart condition. You dissent, but the experimenter still insists that you continue. The learner makes errors galore. You plead with your pupil to concentrate; you don’t want to hurt him. But your concerns and motivational messages are to no avail. He gets the answers wrong again and again. As the shocks intensify, he shouts out, “I can’t stand the pain, let me out of here!” Then he says to the experimenter, “You have no right to keep me here!” Another level up, he screams, “I absolutely refuse to answer any more! You can’t hold me here! My heart’s bothering me!”

Obviously you want nothing more to do with this experiment. You tell the experimenter that you refuse to continue. You are not the kind of person who harms other people in this way. You want out. But the experimenter continues to insist that you go on. He reminds you of the contract, of your agreement to participate fully. Moreover, he claims responsibility for the consequences of your shocking actions. After you press the 300-volt switch, you read the next keyword, but the learner doesn’t answer. “He’s not responding,” you tell the experimenter. You want him to go into the other room and check on the learner to see if he is all right. The experimenter is impassive; he is not going to check on the learner. Instead he tells you, “If the learner doesn’t answer in a reasonable time, about five seconds, consider it wrong,” since errors of omission must be punished in the same way as errors of commission — that is a rule.

As you continue up to even more dangerous shock levels, there is no sound coming from your pupil’s shock chamber. He may be unconscious or worse. You are truly disturbed and want to quit, but nothing you say secures your exit from this unexpectedly distressing situation. You are told to follow the rules and keep posing the test items and shocking the errors.

Now try to imagine fully what your participation as the teacher would be. If you actually go all the way to the last of the shock levels, the experimenter will insist that you repeat that XXX switch two more times. I am sure you are saying, “No way would I ever go all the way!” Obviously, you would have dissented, then disobeyed and just walked out. You would never sell out your morality. Right?

Milgram once described his shock experiment to a group of 40 psychiatrists and asked them to estimate the percentage of American citizens who would go to each of the 30 levels in the experiment. On average, they predicted that less than 1 percent would go all the way to the end, that only sadists would engage in such sadistic behavior, and that most people would drop out at the tenth level of 150 volts. They could not have been more wrong.

In Milgram’s experiment, two of every three volunteers (65 percent) went all the way up to the maximum shock level of 450 volts. The vast majority of people shocked the victim over and over again despite his increasingly desperate pleas to stop. Most participants dissented from time to time and said they did not want to go on, but the researcher would prod them to continue.

Over the course of a year, Milgram carried out 19 different experiments, each one a different variation of the basic paradigm. In each of these studies he varied one social psychological variable and observed its impact. In one study, he added women; in others he varied the physical proximity or remoteness of either the experimenter-teacher link or the teacher-learner link; had peers rebel or obey before the teacher had the chance to begin; and more.

In one set of experiments, Milgram wanted to show that his results were not due to the authority power of Yale University. So he transplanted his laboratory to a run-down office building in downtown Bridgeport, Connecticut, and repeated the experiment as a project ostensibly of a private research firm with no connection to Yale. It made hardly any difference; the participants fell under the same spell of this situational power.

The data clearly revealed the extreme pliability of human nature: depending on the situation, almost everyone could be totally obedient or almost everyone could resist authority pressures. Milgram was able to demonstrate that compliance rates could soar to over 90 percent of people continuing to the 450-volt maximum or be reduced to less than 10 percent — by introducing just one crucial variable into the compliance recipe.

Want maximum obedience? Make the subject a member of a “teaching team,” in which the job of pulling the shock lever to punish the victim is given to another person (a confederate), while the subject assists with other parts of the procedure. Want resistance to authority pressures? Provide social models — peers who rebel. Participants also refused to deliver the shocks if the learner said he wanted to be shocked; that’s masochistic, and they are not sadists. They were also reluctant to give high levels of shock when the experimenter filled in as the learner. They were more likely to shock when the learner was remote than in proximity.

Across this diverse range of variations, run on ordinary American citizens of widely varying ages and occupations and of both genders, it was possible to elicit low, medium, or high levels of compliant obedience with a flick of the situational switch. Milgram’s large sample — a thousand ordinary citizens from varied backgrounds — makes the results of his obedience studies among the most generalizable in all the social sciences. His classic study has been replicated and extended by many other researchers in many countries.

Recently, Thomas Blass of the University of Maryland-Baltimore County [author of The Man Who Shocked The World and creator of the terrific website StanleyMilgram.Com] analyzed the rates of obedience in eight studies conducted in the United States and nine replications in European, African, and Asian countries. He found comparably high levels of compliance in all. The 61 percent mean obedience rate found in the U.S. was matched by the 66 percent rate found across all the other national samples. The degree of obedience was not affected by the timing of the studies, which ranged from 1963 to 1985.

Other studies based on Milgram’s have shown how powerful the obedience effect can be when legitimate authorities exercise their power within their power domains. In one study, most college students administered shocks to whimpering puppies when required to do so by a professor. In another, all but one of 22 nurses flouted their hospital’s procedure by obeying a phone order from an unknown doctor to administer an excessive amount of a drug (actually a placebo); that solitary disobedient nurse should have been given a raise and a hero’s medal. In still another, a group of 20 high school students joined a history teacher’s supposed authoritarian political movement, and within a week had expelled their fellows from class and recruited nearly 200 others from around the school to the cause.

Now we ask the question that must be posed of all such research: what is its external validity? What are the real-world parallels to the laboratory demonstration of authority power?

* * *

That question I will turn to in Part II of this series (to be posted next week).

*****See also Part II and Part III of this Series.*****


Posted in Social Psychology | 38 Comments »

Black History is Now

Posted by Jon Hanson & Michael McCann on February 14, 2007

When many Americans think about Black History Month, the operative word is “history.” They take what might be called a “history was then” perspective.

For that group, February is a time to remember and regret how bad things used to be and to celebrate a few of the household-name heroes who helped expose and reform the deep and blatant racism of our past. Though images of some of those racist practices are still quite fresh — gruesome lynchings, hateful Ku Klux Klan rituals, law enforcement officers unlatching fire hoses, unharnessing billyclubs, and unleashing police dogs on civil rights activists — the “history was then” crowd finds those images both disgusting and reassuring. Look how far we’ve come.

One psychology experiment — commonly called the “doll test” — famously illustrates those bad old days. In the early 1950s, the wife-and-husband team of Kenneth and Mamie Clark asked black children, ages three to seven, a series of questions about some plastic baby dolls that were identical except for color. The responses brought a latent reality into black-and-white relief. Ten of sixteen of the young black children preferred the white dolls to the black dolls. Furthermore, they attributed more positive characteristics (e.g., “good” and “nice”) to the white dolls. The Clarks concluded that “prejudice, discrimination, and segregation” caused black children to develop a sense of inferiority and self-hatred.

Kenneth Clark Conducting Doll Test

Soon thereafter, those disturbing findings would help to shift public policy and interpretation of the law. Indeed, they figured prominently in several legal cases, the most important of which was Brown v. Board of Education of Topeka, 347 U.S. 483 (1954).

In that landmark case, which examined Jim Crow’s “separate but equal” public-school education system, the U.S. Supreme Court was presented with this fundamental question: “Does segregation of children in public schools solely on the basis of race, even though the physical facilities and other ‘tangible’ factors may be equal, deprive the children of the minority group of equal educational opportunities?” The Justices’ unanimous answer of “yes” depended significantly on the argument that state-enforced separation denotes inferiority which is internalized by the students to their detriment. The doll test was offered as evidence of that dynamic.

The “history was then” contingent would, we suspect, gasp at the results of the doll test and then point to Brown v. Board as evidence of how the laws, not to mention our customs and sensibilities, have all changed since 1954.

That was then, and this is now. Look who’s running for President! Look who’s coaching head-to-head in the Super Bowl! Look at Oprah and Tiger — they are so famous and beloved, not to mention wealthy, that we can dispense with their last names. No longer is “black” somehow “inferior.” The doll test is interesting today as a mile marker of the distance we’ve come in just fifty years.

But there is another, less common way to view history. Some look to it as if it were a mirror — a means of getting a different and potentially more accurate perspective on ourselves today. The presumption of this “history is now” approach is simple: the shortcomings of those who went before us are probably to be found in one form or another in ourselves. As Arthur Lovejoy put it: “The adequate record of even the confusions of our forebears may help, not only to clarify those confusions, but to engender a salutary doubt whether we are wholly immune from different but equally great confusions.”

Kiri Davis, a seventeen-year-old student at Manhattan’s Urban Academy, falls into the “history is now” camp. Like Kenneth and Mamie Clark five decades earlier, Davis has publicly documented how progress is often more imagined than real. In her award-winning documentary, “A Girl Like Me” (2006), Davis recorded how she duplicated the Clarks’ doll test among young girls in Harlem.

Davis became interested in the project–which has drawn the praise of many, including Deborah Archer over at BlackProf–when she, like many teenagers, came to understand that beauty is not only skin deep, it is tone sensitive.

Davis didn’t come to that hypothesis through reading the relevant social scientific literature, which we will touch on below. Small talk among friends made explicit what they all knew without asking: the lighter and whiter a person’s skin tone, and the straighter and “less kinky” her hair, the more attractive she is. Davis was deeply discouraged by these conversations, but realized that pressing the point would be difficult because the subject of “blackness” is often, as she put it, “too taboo to talk about,” even in the black community. She also observed that trying to dissuade her friends from their thinking was fruitless since, “[y]ou could tell these people about the standards of beauty that are forced on young girls all you want to. But they won’t get it until you show them.”

So, “show[ing] them” was what she set out to do. Davis replicated Kenneth and Mamie Clark’s famous study, and observed the very same behavior in her subjects: a majority of young black girls regarding white dolls as prettier and more likeable than black dolls. History is now.

No doubt cases like Brown v. Board of Education and the civil rights movement generally represented significant strides toward a more just and equitable society. Unfortunately, however, much of the progress was either short-lived or illusory. Consider this excerpt from Davis’s study, as reported by Hazel Trice Edney of the Baltimore Times:

The reassuring female voice asks the child a question: “Can you show me the doll that looks bad?”

The child, a preschool-aged Black girl, quickly picks up and shows the Black doll over a White one that is identical in every respect except complexion.

“And why does that look bad?”

“Because she’s Black,” the little girl answers emphatically.

“And why is this the nice doll?” the voice continues.

“Because she’s White.”

“And can you give me the doll that looks like you?”

The little girl hesitates for a split second before handing over the Black doll that she has just designated as the uglier one.

Unfortunately, Kiri Davis’s compelling video is barely a drop in the bucket of evidence that social scientists have amassed indicating the continued influence of robust race-based stereotypes and prejudices. Worse still, the evidence is that those beliefs and feelings are not limited to children’s attitudes toward dolls or to the beauty sense of teenagers. They are ubiquitous. And here may be the worst part: whatever our conscious or explicit attitudes and intentions, today’s social psychologists have discovered a set of biases that operate beneath the radar of those salient, accessible, and misleading cognitive features.

Such attitudes, sometimes called “implicit associations,” have been uncovered, for instance, through an internet-based experiment called the implicit association test (or IAT) that in some ways resembles the Clarks’ famous doll test — only in a way that does not defer to our express attitudes. (Among others, two Contributors to The Situationist, Mahzarin Banaji and Brian Nosek, have been integral in developing the methodology and analyzing the meaning of its results. And among legal scholars, two other Contributors to The Situationist, Jerry Kang and Linda Hamilton Krieger, have been especially active in exploring the possible implications of those results for particular areas of law.)

In the Race IAT, subjects take a timed test in which they are shown a computer screen and asked to match positive words (love, wonderful, peace) or negative words (evil, nasty, failure) with faces of African-Americans or Whites. Very roughly, subjects who take less time to link positive words with Whites and more time to link positive words with Blacks—or who are quicker at connecting negative words with Blacks and slower at connecting negative words with Whites—demonstrate an implicit bias for white faces or against Blacks. You can take the test yourself by clicking here. Millions of people have. And, among other findings, the IAT reveals that approximately three-quarters of White subjects and half of the Black subjects show such a bias. Think of this as the post-PC example of the doll test. History is now.

But wait a minute. If the bias is only implicit and subconscious, how important can it be? Here, too, the news is bad. Although the research is by now piled high and the findings at times complex, the results can be fairly summarized as follows: implicit bias influences behavior in the way that we assume (often incorrectly) explicit attitudes do. Put differently, the “attitudes” that we do not perceive in ourselves are often more powerful in shaping our conduct than are the attitudes of which we are conscious — situation eclipses disposition.

Other research illustrates that the distinctions made among various shades of “gray,” the issue motivating Kiri Davis, have important behavioral consequences for adults. For instance, Elizabeth Klonoff and Hope Landrine’s study “Is Skin Color a Marker for Racial Discrimination?” found that “dark-skinned Blacks were 11 times more likely to experience frequent racial discrimination than their light-skinned counterparts.” Similarly, Rodolfo Espino and Michael M. Franz’s study “Latino Phenotypic Discrimination Revisited: The Impact of Skin Color on Occupational Status” finds that “dark-skinned Mexican Americans and Cuban Americans continue to face higher levels of discrimination in the labor market.”

One recent, remarkable study strikes us as particularly revealing regarding the life-and-death significance of “blackness.” Jennifer Eberhardt, Paul Davies, Valerie Purdie-Vaughns, and Sheri Lynn Johnson found a disturbing correlation between how prototypically “black” a death-eligible criminal defendant appeared and whether that defendant was sentenced to death. Eberhardt’s website summarizes the research this way:

The vast majority of studies designed to examine the influence of race in capital punishment have found that murderers of White victims are much more likely than murderers of Black victims to be sentenced to death. Drawing from an extensive database compiled by David Baldus, she and her colleagues obtained the photographs of Black defendants who were death eligible in Philadelphia from 1979 to 1999. She presented the faces of Black defendants who had killed White victims to naive participants (who did not know that the photographs depicted convicted murderers) and asked them to rate each face on how stereotypically Black it appeared. The effect of stereotypicality was clear. Whereas only 24% of the defendants rated as less stereotypically Black received a death sentence, 58% of the defendants rated as more stereotypically Black received a death sentence. This stereotypicality effect was significant even when controlling for defendant attractiveness and the most significant non-racial factors known to influence sentencing (i.e., aggravating or mitigating circumstances, murder severity, defendant socioeconomic status, and victim socioeconomic status).

It seems that jurors view criminal defendants in very much the same way that young children see dolls. White is good. Black is bad. Very black is very bad.

To paraphrase Cornel West, race still matters and history is a fundamental lens for seeing what we would otherwise like to deny. In West’s words, “[a] fully functional multiracial society cannot be achieved without a sense of history and open, honest dialogue.” History is now.

(Later this month, we will come back to this topic in another post to explore other differences and tensions that seem to exist between the “black history is then” and the “black history is now” camps. To view articles that we have written on the connection of historical racial disparities to current racial disparities, click here and here.)

Posted in History, Implicit Associations, System Legitimacy | 17 Comments »

Conference on Law & Mind Sciences

Posted by The Situationist Staff on February 10, 2007


Invitations (pictured here) to the upcoming Conference on Law and Mind Sciences were mailed out this week. The conference will introduce to lawyers, law students, and legal theorists some of the key discoveries and insights of social psychology, social cognition, and related fields regarding the purposes, motives, and consequences of law. The conference will bring together some of the country’s most distinguished social psychologists and legal academics and will include both the presentation of research by psychologists and a discussion of that research with legal scholars. It will be held on March 10, 2007 in Austin Hall at Harvard Law School.


To register or to learn more details, go to the Conference Tab of the website for The Project of Law and Mind Sciences at Harvard Law School.

Posted in Events, Legal Theory, Social Psychology | Leave a Comment »

The Heat is On

Posted by The Situationist Staff on February 9, 2007


Last week, the United Nations made public the fourth IPCC (Intergovernmental Panel on Climate Change) report on climate change. The report, collaboratively prepared by many of the world’s most authoritative climate scientists, concluded that there is at least a 90 percent chance that we humans are the cause of current increasing temperatures around the planet. Among other consequences, that temperature rise is likely to cause an increase in sea levels of between seven and twenty-three inches, contribute significantly to the disappearance of alpine glaciers, and lead to still more severe weather events.

Viewed in context, that’s the good news. The bad news is that our goose is already cooking, and the longer-term consequences of our environmental predicament seem dire and inevitable. It is as if we have dialed up the oven and have no way to dial it back down. Whatever we do today, the heat will linger and the bird will continue to roast. Like it or not, the heat is on.

This is a challenge whose solution requires that we think and act in new ways across unfamiliar time frames, borders, languages, and cultures. Forget short-term solutions. Any hope for improvement calls for immediate and dramatic changes that, at best, will yield only gradual and ambiguous results. Improvements will be measured not in years or even decades but in lifetimes.


The IPCC’s science-based conclusions and the visibly dramatic weather-related anomalies of the last few years are the sorts of things that are finally getting through to the public and undermining the credibility of doubt-mongers. But why has it taken so long to get this far? How has it happened that we humans have been for so long obliviously turning up the heat in our own oven? And why aren’t we taking dramatic steps to begin addressing the causes of this planet-altering and species-threatening tragedy?

Energy Secretary Samuel Bodman claims that the law of “unintended consequences” cautions against enacting significant reform any time soon. That sounds sensible until one remembers that just as regulatory actions can have unintended consequences, so can regulatory omissions. After all, the long-term absence of meaningful regulation seems likely to have contributed to the unintended climate change. Something else is contributing to our sluggish and half-hearted reaction. Social psychology and related mind sciences have much to teach us about what that something else is.

We humans think we see clearly all relevant causal factors, though many are all but invisible to us. We have a difficult time identifying, much less monitoring, much less understanding slow and non-salient changes in our environments. We have an aversion to complexity and unanswered questions. We have trouble connecting bad outcomes with benign intentions. We are motivated to deny the existence of threats to our system and of any need to alter the behavior to which we have grown accustomed. Those motivations and others are easily exploited by wealthy, powerful industries that have made it their business to criticize, mock, and otherwise raise doubts about the emerging scientific consensus. But wait! There’s more!

Below you will find a terrific essay by Dan Gilbert, who has generously agreed to share some of his short, popular writings on The Situationist. (Thanks, Dan.) This piece reveals how some of our psychological proclivities and vulnerabilities have contributed to our environmental predicament. If you haven’t read his work before, you are in for a special treat. Dan is clever, eloquent, and hilarious. But, be warned, his message is also quite sobering.


If Only Gay Sex Caused Global Warming
Los Angeles Times, July 2, 2006

No one seems to care about the upcoming attack on the World Trade Center site. Why? Because it won’t involve villains with box cutters. Instead, it will involve melting ice sheets that swell the oceans and turn that particular block of lower Manhattan into an aquarium.

The odds of this happening in the next few decades are better than the odds that a disgruntled Saudi will sneak onto an airplane and detonate a shoe bomb. And yet our government will spend billions of dollars this year to prevent global terrorism and … well, essentially nothing to prevent global warming.

Why are we less worried about the more likely disaster? Because the human brain evolved to respond to threats that have four features — features that terrorism has and that global warming lacks.

First, global warming lacks a mustache. No, really. We are social mammals whose brains are highly specialized for thinking about others. Understanding what others are up to — what they know and want, what they are doing and planning — has been so crucial to the survival of our species that our brains have developed an obsession with all things human. We think about people and their intentions; talk about them; look for and remember them.

That’s why we worry more about anthrax (with an annual death toll of roughly zero) than influenza (with an annual death toll of a quarter-million to a half-million people). Influenza is a natural accident, anthrax is an intentional action, and the smallest action captures our attention in a way that the largest accident doesn’t. If two airplanes had been hit by lightning and crashed into a New York skyscraper, few of us would be able to name the date on which it happened.

Global warming isn’t trying to kill us, and that’s a shame. If climate change had been visited on us by a brutal dictator or an evil empire, the war on warming would be this nation’s top priority.

The second reason why global warming doesn’t put our brains on orange alert is that it doesn’t violate our moral sensibilities. It doesn’t cause our blood to boil (at least not figuratively) because it doesn’t force us to entertain thoughts that we find indecent, impious or repulsive. When people feel insulted or disgusted, they generally do something about it, such as whacking each other over the head, or voting. Moral emotions are the brain’s call to action.

Although all human societies have moral rules about food and sex, none has a moral rule about atmospheric chemistry. And so we are outraged about every breach of protocol except Kyoto. Yes, global warming is bad, but it doesn’t make us feel nauseated or angry or disgraced, and thus we don’t feel compelled to rail against it as we do against other momentous threats to our species, such as flag burning. The fact is that if climate change were caused by gay sex, or by the practice of eating kittens, millions of protesters would be massing in the streets.

The third reason why global warming doesn’t trigger our concern is that we see it as a threat to our futures — not our afternoons. Like all animals, people are quick to respond to clear and present danger, which is why it takes us just a few milliseconds to duck when a wayward baseball comes speeding toward our eyes.

The brain is a beautifully engineered get-out-of-the-way machine that constantly scans the environment for things out of whose way it should right now get. That’s what brains did for several hundred million years — and then, just a few million years ago, the mammalian brain learned a new trick: to predict the timing and location of dangers before they actually happened.

Our ability to duck that which is not yet coming is one of the brain’s most stunning innovations, and we wouldn’t have dental floss or 401(k) plans without it. But this innovation is in the early stages of development. The application that allows us to respond to visible baseballs is ancient and reliable, but the add-on utility that allows us to respond to threats that loom in an unseen future is still in beta testing.

We haven’t quite gotten the knack of treating the future like the present it will soon become because we’ve only been practicing for a few million years. If global warming took out an eye every now and then, OSHA would regulate it into nonexistence.

There is a fourth reason why we just can’t seem to get worked up about global warming. The human brain is exquisitely sensitive to changes in light, sound, temperature, pressure, size, weight and just about everything else. But if the rate of change is slow enough, the change will go undetected. If the low hum of a refrigerator were to increase in pitch over the course of several weeks, the appliance could be singing soprano by the end of the month and no one would be the wiser.

Because we barely notice changes that happen gradually, we accept gradual changes that we would reject if they happened abruptly. The density of Los Angeles traffic has increased dramatically in the last few decades, and citizens have tolerated it with only the obligatory grumbling. Had that change happened on a single day last summer, Angelenos would have shut down the city, called in the National Guard and lynched every politician they could get their hands on.

Environmentalists despair that global warming is happening so fast. In fact, it isn’t happening fast enough. If President Bush could jump in a time machine and experience a single day in 2056, he’d return to the present shocked and awed, prepared to do anything it took to solve the problem.

The human brain is a remarkable device that was designed to rise to special occasions. We are the progeny of people who hunted and gathered, whose lives were brief and whose greatest threat was a man with a stick. When terrorists attack, we respond with crushing force and firm resolve, just as our ancestors would have. Global warming is a deadly threat precisely because it fails to trip the brain’s alarm, leaving us soundly asleep in a burning bed.

It remains to be seen whether we can learn to rise to new occasions.

(To visit the Climate Change Project, click here. To watch a Stephen Colbert Report interview of Tim Flannery, author of The Weathermakers: The History and Future Impact of Climate Change, click here.)

(Dan Gilbert is also the author of the best-selling book, “Stumbling on Happiness,” which The Situationist highly recommends. To read a New York Times review of “Stumbling,” click here.)

Posted in Deep Capture, Public Policy, Social Psychology, Uncategorized | 7 Comments »

Thinking the Situation into Legal Theory: The Promise of Experimental Parable

Posted by David Yosifon on February 8, 2007

In law – legal theory, practice, and education – we find that a powerful way to deepen our understanding of fundamental legal principles is to examine and review their operation in past cases. One of the great benefits of the case method is that it provides the legal community with a common reservoir of elaborate stories showing human lives intersecting with abstract principles, stories that we all draw on when talking (or arguing) with each other about how fundamental principles should be applied to the myriad unmapped circumstances to which legal thinking is called to attend. The cases that do this work are not necessarily the most “important” cases in the sense of being landmark or doctrinally innovative. Often the most powerful cases in legal discourse derive their potency – or their frequent use, anyway – from their dramatic fact patterns, from the funny, peculiar parties that inhabit them, or from their exquisite exegesis in the hands of a gifted jurist. (Lawyers, think of your own favorites: Vosberg v. Putney, Meinhard v. Salmon, any of the greats.)

I believe that this extraordinarily powerful method can and should be deployed by legal theorists who are concerned with bringing the lessons of social science – and the “mind sciences” in particular – to legal analysis. We can develop and deploy a canon of particularly evocative studies that provide through their constant re-telling and continual re-examination a deep and shared understanding of the meaning of “situational influence” as we might want to make use of the concept in legal analysis. Consider the parable of the Good Situation, an experimental case I have featured in the Law and Behavioralism seminar I am presently teaching at Santa Clara.

You may be familiar with the story of the Good Samaritan, one of the more famous parables told by Jesus in the New Testament. That story actually emerges in the Gospel of Luke during a dialogue in which Jesus is fielding questions put to him by “a scholar of the law.” The scholar asks Jesus to define who counts as a “neighbor” for the purposes of applying the principle that one should love one’s neighbor as oneself. Jesus replies with the parable: a man was beaten up by thieves and left on the road hurt and helpless; a priest came by and kept right on walking; a second man came by and walked on as well. Finally came a man from Samaria, who helped the injured man (even put him up in an inn). “Which of these three . . . was neighbor to the victim?” Jesus asks. Well, the Good Samaritan, of course. (The scholar of laws answers that it was “the one who treated him with mercy,” to which Jesus replies, “Go and do likewise.”)

Jean-Francois Millet’s “The Good Samaritan” (1846)

The moniker “Good Samaritan” itself is not in the gospel, yet the familiarity of that term in our own society well signifies the dogged human tendency to attribute a person’s actions to their individual disposition (their inherent “goodness” or “fairness” or “selfishness”), often to the exclusion of appreciating the potent influence of external situation in accounting for people’s conduct. That potent influence can be grasped when we turn from the Good Samaritan to the parable of the Good Situation, which comes to us from a study John M. Darley and C. Daniel Batson did in 1973: three groups of seminary students were told that they were to give a brief sermon on a chosen topic to a group that was waiting in a building on the other side of the campus. The first group of seminarians was told that they had to hurry across campus, that they were already late. The second group was told to head right over because they were expected in just a few minutes. A third group was told that they weren’t expected for a little while, but that they might as well head over early. Along the path that the seminarians had to walk to reach their appointment lay a man, hurt and needing help (feigning, for he was a collaborator in the study under way). Now who was the neighborly one?

Of the seminarians in the “high hurry” and “medium hurry” situations, just 10 percent stopped to help the wounded man. But among those in the “low hurry” situation more than 60 percent stopped to help. Who was the neighborly one? The experimental parable reveals that rather than inquire about who among us is good, we might do better to inquire into which situations are good, in the sense of influencing neighborly behavior.

In the biblical parable, the authority for discerning that the Samaritan was good and that the others were not is confidently given over to common sense and intuition. Jesus had only to ask his interlocutor who the neighborly one was, and the interlocutor gets it right immediately. Who couldn’t? Indeed, the point of the Good Samaritan parable as Jesus told it appears to be that it should be (and is) obvious to the “scholar of laws” what it means to act neighborly. (Indeed, the scholar easily gets it right despite the traditional enmity felt between Jews and Samaritans in the scholar’s society, a rich layer of the biblical story that is largely lost in our contemporary appreciation of the tale.) Yet the parable of the Good Situation suggests that discerning the contours of moral (and legal) principles may be much more difficult than our intuition would lead us to believe. If we want to understand neighborliness, it turns out, we’re going to have to talk about the neighborhood, and not just the neighbors. The authority for the conclusion this time comes not from intuition, but from science. Indeed, this parable presses the importance of holding intuition in suspicion in order to detect the influence not only of situation, but of situational manipulation – for consider the pivotal part played in the parable by the experimenters’ framing of the errand, and by the collaborating wounded man. The parable well reveals not only that situation is more powerful than we tend to appreciate, but also that situation can be influenced, harnessed, and deployed in potent and predictable ways to shape human behavior that most of us, using common sense and intuition, would tend to attribute to disposition. And in law we may be called upon to examine the influence of situation more perniciously or exploitatively deployed than in the controlled social science evident in our parable.

Just as in conventional legal analysis, the examination of particular situations will always require reference to particularized cases and studies, operationalizing broader understandings. But sometimes a broad orientation helps to identify which narrower cases should be examined, and from which angles. Evocative tales like the Good Situation can help think such a situationist orientation into law.

Posted in Choice Myth, Legal Theory, Philosophy, Social Psychology | 5 Comments »

Think you’ve got magical powers?

Posted by Emily Pronin on February 5, 2007

Baseball, they say, is America’s game. But have you been to the ballpark recently? It seems we care about much more than just curve balls, bunts, stolen bases, and errors. More or less everyone playing or watching the sport is also focused on hexes, curses, jinxes, and any sort of outcome-altering magic.

In 2004, Curt Schilling, Boston’s pitching ace, helped the Red Sox win their first World Series (as if you hadn’t heard) in 86 years. But Beantown fans saw the win as much more than just a single World Series. To them, Schilling helped to overcome the “curse of the Bambino,” which had so long prevented the team from re-establishing its erstwhile preeminence.

Was it Schilling’s split-finger fastball that made the difference? Or, perhaps, was it one of his many rituals? On his journeys between dugout and mound, Schilling would leap over the foul line; before delivering his first pitch, he would pull a necklace from under his shirt and peck its pendant; and he never began his warmup routine for night games until exactly 6:45 p.m.

It’s not just the players who believe in magic. Fans, who never even touch the ball, appear to do all they can to influence a game through similarly superstitious practices. The sign most flashed by Red Sox fans that October said simply “We Believe.” “Believe in what?” you might ask. It’s hard to say exactly, but the signs seemed to be a public profession of faith in magic, miracles, destiny, and an ability through collective will to reverse the curse.

The documentaries made about the curse-ending Series treated the season as a miracle – a long-overdue reward for fans keeping the faith.

It begins to make one wonder. What, really, is America’s pastime? Is it baseball? Or is it voodoo? OK, it’s probably baseball. But at least one fan I know of combined the two: she made a voodoo doll of the opposing team’s coach and stuck pins in it before a critical tournament match-up. She still speaks with pride about the fact that her team won that night. Although few of us are sticking pins into dolls, there is growing evidence that our faith in things magical and our efforts to create magic are far more common and central to our daily lives than most of us would acknowledge.

Experiments that I’ve been conducting with Daniel Wegner, Sylvia Rodriguez, and Kimberly McCarthy have shown that people sometimes claim magical powers—personal responsibility for events they couldn’t possibly have controlled.

While most people would report believing that thoughts alone cannot cause external events, in these experiments people claimed responsibility for events that they had only willed to occur. For example, one experiment gauged whether people thought they had harmed another person when they stuck pins in a voodoo doll named after that person. Subjects in the experiment believed in the power of their voodoo hexes, but only if they had first generated evil thoughts about their victim.

For the voodoo experiment, subjects were led to think evil thoughts about another person who they believed was also a subject in the experiment (but who actually worked for the researchers). In a control condition, they were not led to think such thoughts. Each subject then stuck pins in a voodoo doll representing the alleged victim, who was seated at the table across from them. When the “victim” then faked having a headache, those who had harbored evil thoughts were more likely than their peers in a control condition to believe they had caused it.

In addition to experiments with voodoo hexes, we’ve also studied fans watching sports. In one study, subjects watched as a player shot baskets. Spectators were more likely to perceive that they had caused his success if they had first been asked to visualize his success (“Imagine the ball falling through the hoop”).

In another experiment conducted at a live basketball game (Princeton vs. Harvard), some spectators were given a task before the start of the game: to think about how each of the starting players could contribute to it. Other audience members were not given this assignment (they instead were led to think about the players’ appearances). At halftime, those who had thought about the players’ potential contributions to the game reported having had more of an impact on the game than those in the control condition. In another study, people watching the NFL Super Bowl on television felt more responsible for that game’s outcome the more they thought about the game while watching it. Never mind that all of them had watched the game in front of a television at the campus student center.

Why would that be? Maybe the better question is, why not? Although the perception of mental power is (probably) without rational basis, the illusion of magic is comforting and, perhaps, adaptive. Belief in magic gives us hope, causal explanations, and the illusion of control – all of which we tend to crave – at times when any of those things might be hard to come by. Fears can be assuaged, threats can be tamed, stress can be eased, physical constraints can be transcended, and smoldering embers of hope can be rekindled when magic is possible.

Perhaps that partially explains why, assuming the folk wisdom is correct, individuals seem most likely to seek out magic in situations where they feel they have the least control over outcomes and where they face particularly salient threats. Research conducted in the labs of Daniel Gilbert (at Harvard) and Giora Keinan (at Tel Aviv University) suggests that it is at death’s door, or in times of extreme stress and danger, that many atheists find religion.


And, as reported in a recent CNN story, soldiers on the field of battle cling tightly to good luck charms. The video report from Iraq provides compelling and touching evidence of the powerful need we all may have – soldier, journalist, and viewer alike – to believe in magic.

Although our research focuses on the role of magic in our causal attributions, our findings may have implications for explaining many attributions of causation, responsibility, and blame, even when magic is not involved. They may help explain, for example, why “intent” is so often critical in law when rights and penalties are assessed. Perhaps that is one reason why a defendant’s thoughts prior to killing someone determine whether the killing was a murder in the first degree or something less. Take the same actions and add malicious premeditation, and the actor is considered far more blameworthy – that was true in our studies, and it’s true in the law. Maybe there is a connection.

Our article, titled “Everyday Magical Powers: The Role of Apparent Mental Causation in the Overestimation of Personal Influence,” appears in the August issue of the Journal of Personality and Social Psychology. (For a recent New York Times article summarizing that research and other interesting work on the sources and ubiquity of magical thinking, click here .)

Posted in Life, Situationist Sports, Social Psychology | 15 Comments »

The Big Game: What Corporations Are Learning About the Human Brain

Posted by Adam Benforado on February 4, 2007

Super Bowl XLI

As I stake out my position on the couch this evening – close enough to reach the pretzels and my beer, but with an optimal view of the TV – it will be nice to imagine that the spectacle about to unfold is a sporting event. It shouldn’t be too hard: after all, there on the screen will be the field, Brian Urlacher stretching out his quads, Peyton Manning tossing a football, referees in their freshly starched zebra uniforms milling about. Yes, I’ll think to myself, this has all the makings of a football game.

How foolish.

The Super Bowl isn’t about sports; it’s about making money. And with 90 million or so viewers, there is a lot of money to be made.

With CBS charging an estimated $2.6 million for each 30-second advertising spot, it’s no surprise that corporations don’t mess around with guessing what the most effective approach will be for selling their products. They call in the scientists.

For the second year in a row, FKF Applied Research has partnered with the Ahmanson-Lovelace Brain Mapping Center at the University of California, Los Angeles, to “measure the effect of many of the Super Bowl ads by using fMRI technology.” The research involves “track[ing] the ads on a host of dimensions by looking for activity in key parts of the brain areas that are known to be involved in wanting, choosing, sexual arousal, fear, indecision and reward.” As the FKF website explains, this research is useful to Fortune 100 companies because it

shows clearly that what people say in focus groups and in response to poll questions is not what they actually think, feel and do. fMRI scans using our analytical methods allow us to see beyond self report and to understand the emotions and thoughts that are driving (or impeding) behavior.

Looking beyond the spoken word provides immense and actionable insights into a brand, a competitive framework, advertising and visual images and cues.

As it turns out, “brand” lives in a particular place in the human brain:

[W]hen [FKF] did an academic study on the impact of iconic brands, such as Pepsi and Coke and McDonalds, [they] found that the same part of the brain lit up over images of sports logos – say, for the NBA or NFL. There is a clear connection in the human brain between the anticipation of eating that you get from, say, the Coke logo and with the NBA logo.


For someone like me, who has always wondered why I feel so hungry reading the sports page, this is interesting stuff. For a corporate CEO, this is extremely interesting – and actionable – stuff. For everyone else . . . this is a reason to be concerned.

Corporations are using science to figure out how our brains work so they can sell more products, and what they are finding is that our brains don’t work the way we think they do.

Anticipating this worry, FKF has an Ethics tab on its website:

We are committed to the highest level of ethical behavior in conducting our work. We are determined to be diligent in carving out a new field, and being a leader and advocate in ensuring the best interests of our subjects, the public, and our clients are protected. . . . We believe that wide dissemination about how people make decisions will empower all concerned – both consumers and purveyors of information. Such information, freely discussed in a democracy, will allow us to understand better how marketing is affecting us, discredit manipulation, promote communication, and help illuminate a process that fundamentally shapes the lives of human beings.

Sounds good – in fact, it sounds like situationism, and I have no reason to think that the founders of FKF, or the university scientists with whom they work, aren’t upstanding citizens with good moral compasses. It’s just that I’m still uneasy.

Corporations don’t exactly have a good track record when it comes to learning counterintuitive information about human decision making and then using it responsibly. Rather, the best approach for maximizing shareholder profit is to discover some seemingly illogical detail about the human brain, use that knowledge to sell more widgets, and then convince the public that their naïve (and incorrect) beliefs about how they make choices are, in fact, correct.

Take big tobacco: as Jon Hanson and others have documented, after figuring out that nicotine was addictive and could compel people to buy Marlboros, cigarette companies made a concerted effort both to up nicotine concentrations in their products and to convince people, through advertising, that they were rational actors who were not easily manipulated. From the perspective of an entity that is charged, through our legal rules, with making money (and not with doing social good), it makes little sense to alter people’s situations to get them to be better consumers and then tell them that you are doing it and that it matters.

Why, that would be as silly as announcing a weak-side blitz to the quarterback before the play. Sure, it would be the nice, ethical thing to warn decent gentlemen like Manning and Rex Grossman of the imminent threat, but it’s not part of the game we’ve developed. Football is a game where you can get blind-sided.

And, as corporations and our brains make certain, so is watching football.

(To view the full collection of 2006 Super Bowl ads, click here. To read about the results of a brain-scan study of men and women watching the 2006 Super Bowl by UCLA neuroscientist Marco Iacoboni, click here. To listen to a recent one-hour NPR (On Point) program on “The Changing World of Advertising,” click here.)

Posted in Emotions, Food and Drug Law, Marketing, Situationist Sports | 6 Comments »

Imagine You Could Change Your Brain. Oops, You Just Did!

Posted by The Situationist Staff on February 3, 2007



Science writer Sharon Begley has a new, mind-bending book, Train Your Mind, Change Your Brain. Begley, who is one of the world’s leading science journalists, delivers good news for people (like us) who aren’t exactly thrilled with their current brains.

Summarizing the revolutionary research in neuroscience, Begley shows how our brains can grow new neurons throughout our lives. She takes readers to the Mind and Life Institute’s meeting (Neuroplasticity: The Neuronal Substrates of Learning and Transformation, at Dharamsala, India in 2004) and uses the meeting as a springboard to survey the state of neuroplasticity research. Publishers Weekly provides this summary:

With frequent tangents into Buddhist philosophy, Begley surveys current knowledge of neuroplasticity. Most interesting is a series of experiments with Buddhist adepts who have spent over 10,000 hours meditating. What these experiments show is tantalizing: it might be possible to train the brain to be better at feeling certain emotions, such as compassion. No less interesting are the hurdles the scientists face in recruiting participants; yogis replied that if these scientists wanted to understand meditation, they should meditate.

Illustration for TIME by David Plunkert

In the book’s conclusion, Begley writes, “The discovery of neuroplasticity, in particular the power of the mind to change the brain, is still too new for scientists, let alone the rest of us, to grasp its full meaning. But even as it offers new therapies for illnesses of the mind, it promises something more fundamental: a new understanding of what it means to be human.” Very Situationist.

To read a brief article by Begley in the recent Time issue on “How the Brain Rewires Itself,” click here.

To listen to a thirty-minute interview of Begley about her new book from NPR’s Talk of the Nation, click here.

Posted in Uncategorized | 5 Comments »
