The Situationist

Archive for November, 2007

Edward O. Wilson’s Situationist Plea

Posted by The Situationist Staff on November 29, 2007

Earth & Moon

In this month’s issue of The Atlantic, Harvard biologist Edward O. Wilson makes a compelling plea to Americans to consider the powerful forces around them and within them that are too little understood or too often ignored. Although he doesn’t express his point in exactly situationist terms, it does seem a situationist message.

He asserts that “the central issue” we face “is sustainable development” or altering this course we’re on toward “wrecking the planet.” “The problem,” Wilson explains, is simple:

“Long-term thinking is for the most part alien to the American mind. . . . To look far forward and to acquire enough accurate vision requires better self-understanding. That in turn will depend on a grasp of history–not just of the latest tick of the geological clock that transpired during the republic’s existence, but of deep history, across the millennia when genetic human nature evolved. . . . Our basic qualities . . . present the greatest risk to the security of civilization.”

Although Americans may have invented the concepts of “conservation and environmentalism,” Wilson continues, we have treated the goals as a “hobby” when we should recognize them as a “survival practice.”

“Now we need a stronger ethic . . . . the foundation of which will be the recognition that humanity was born within the biosphere, and that we are a biological species in a biological world. Like the other species teeming around us, we are exquisitely adapted to this biosphere and to no other.”

Wilson warns that failing to recognize that simple truth — failing to appreciate our profound connection to and dependence upon our situation — is a sure way for us to lose it all. Understanding our situation and “an allegiance to our biological heritage will be our ultimate strength.”

* * *

To read a related Situationist post, see “The Heat Is On.” For a remarkable (24-minute) talk by Edward O. Wilson, view the video below, in which he accepts his 2007 TED Prize and makes a plea on behalf of his constituents, the insects and small creatures, to learn more about our biosphere.

Posted in Life | 1 Comment »

The (Unconscious) Situation of our Consciousness – Part III

Posted by The Situationist Staff on November 29, 2007


This is the third in a series of posts summarizing the research on the hidden situation of our consciousness. The first two posts drew from a 2003 article by Situationist contributors Jon Hanson and David Yosifon, “The Situational Character.” Part I began with Hanson and Yosifon’s summary of some of the fascinating research revealing the ubiquity of “automaticity.” Part II asked the question: “If most of what we perceive, feel, and do is driven by automatic processes, then why is it that most of us perceive most of our behavior to be the consequence of our conscious will?”

This post draws from a 2004 press release by Marguerite Rigoglioso (from Stanford Graduate School of Business) describing fascinating research by social psychologists Christian Wheeler, Aaron Kay, and Lee Ross.

* * *

It’s pretty obvious that you can tell a lot about a person from the way she outfits her home or office. But what you may not know is that your own behavior can be subtly influenced by her choice of items when you’re in that space—without your even realizing it. In studying this effect, Christian Wheeler, assistant professor of marketing, has found that certain types of objects can in fact elicit very specific kinds of behavior.

Wheeler and three other researchers, including Aaron Kay [from the University of Waterloo’s psychology department] and Lee Ross from Stanford’s psychology department, carried out a number of studies in which they exposed individuals to objects common to the domain of business, such as boardroom tables and briefcases, while another group saw neutral objects such as kites and toothbrushes. They then gave all of the participants tasks designed to measure the degree to which they were in a cooperative or competitive frame of mind.

In every case, participants who were “primed” by seeing the business objects subsequently demonstrated that they were thinking or acting more competitively. The effect was strongest when they had to respond in situations that were deliberately ambiguous. When questioned, however, participants denied that being exposed to business-related objects had influenced their behavior in any way.

“People are always trying to figure out how to act in any given situation, and they look to external cues to guide their behavior particularly when it’s unclear what’s expected of them,” Wheeler says. “When there aren’t a lot of explicit cues to help define a situation, we are more likely to act based on cues we pick up implicitly.” Simple exposure to business-related objects, it turns out, can activate the “cognitive components” that are associated with competitive behavior, he says.

For example, participants who had previously looked at pictures of business-relevant materials completed more word fragments, such as wa_, _ight, and c__p___tive, using competition-related words—such as war (vs. was), fight (vs. tight), and competitive (vs. cooperative)—than those in the neutral condition. Such participants also evaluated an ambiguously written scenario involving two men who are undergoing a certain degree of conflict as being much more about competition than cooperation.

Another study transferred the effect to the real world. Participants were given $10 and asked to decide how much they were willing to share with a partner. The catch was that the partner could refuse any offer perceived to be too low, in which case neither participant would receive anything. While subjects exposed to neutral pictures generally split the money 50-50, only 33 percent of those who looked at business-related objects did, showing that they had become less cooperatively oriented. Results were similar when participants were exposed in the experiment room to actual business-related objects, such as a briefcase and an executive pen, as opposed to a backpack and a wooden pencil.
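The task described here is a version of what economists call the “ultimatum game.” For readers who want the payoff logic laid out, here is a minimal sketch in Python; the function name and the responder’s fixed rejection threshold are illustrative assumptions of ours, not details from Wheeler’s experiment (in the study, each human partner judged for himself what counted as too low):

```python
# A minimal sketch of the ultimatum-game payoff rule described above.
# The rejection threshold is an illustrative assumption; in the actual
# study, each human partner decided for himself what was "too low."

def ultimatum_payoffs(offer, pot=10.0, rejection_threshold=3.0):
    """Return (proposer, responder) payoffs for a proposed split.

    The proposer keeps (pot - offer). If the offer strikes the
    responder as too low, he refuses, and both players get nothing.
    """
    if offer < rejection_threshold:
        return (0.0, 0.0)           # refused: neither side is paid
    return (pot - offer, offer)     # accepted: the pot is divided

print(ultimatum_payoffs(5.0))  # (5.0, 5.0): the even split most chose
print(ultimatum_payoffs(2.0))  # (0.0, 0.0): a lowball offer backfires
```

The arithmetic, of course, is trivial; the striking finding is that a briefcase in the room shifted where participants set their offers.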

The effect was lessened, however, when the strategy game that participants were asked to play was deliberately depicted as being cooperative in nature. “This shows that when people are given an explicit context for how to behave, there is less room for business primes to exert an influence,” Wheeler explains.

“These are pretty big effects with pretty minor manipulations,” he says. The fact that participants were unaware that their behavior had been influenced, even when this fact was pointed out to them in the debriefing after the experiment, is also significant. “We’re simply not conscious of how many of the things all around us affect our behavior,” he notes. This can be true, he says, not only when messages reach us through subliminal tricks such as rapid image flashing in advertising, which is designed to circumvent our conscious awareness, but also when the objects are right in front of us, as the participants in the study demonstrated.

Other research has shown that words, concepts, and images can subliminally influence people’s behavior, but Wheeler’s is the first experimental work to show that objects can, as well. One implication of these studies, he notes, is that businesses may want to take a more serious look at how their office decor can be designed to encourage either competition or cooperation.

* * *

For a sample of previous posts (that are not part of this series) discussing the role of unconscious and automatic causes of behavior, see “The Situation of Reason,” “The Situation of Ideology – Part I,” “The Magnetism of Beautiful People,” and “The Unconscious Genius of Baseball Players.”

Posted in Choice Myth, Deep Capture, Implicit Associations, Marketing, Social Psychology | 2 Comments »

Neural (Situational) Sources of What We See

Posted by The Situationist Staff on November 28, 2007

Faces or Vase?

A recent press release on Medical News Today describes research shedding light on the old optical illusion in which people see two faces or a vase. The research indicates that what we see depends on neural activity within our brains. We excerpt portions of the press release below.

* * *

“In this example, whether you see faces or vases depends entirely on changes that occur in your brain, since the image always stays exactly the same,” said John Serences, a UC Irvine cognitive neuroscientist.

In a recent study published in the Journal of Neuroscience, Serences and co-author Geoffrey Boynton, associate professor at the University of Washington, found that when viewing ambiguous images such as optical illusions, patterns of neural activity within specific brain regions systematically change as perception changes. More importantly, they found that patterns of neural activity in some brain regions were very similar when observers were presented with comparable ambiguous and unambiguous images.

“The fact that some brain areas show the same pattern of activity when we view a real image and when we interpret an ambiguous image in the same way implicates these regions in creating the conscious experience of the object that is being viewed,” Serences said.

Findings from their study may further contribute to scientists’ understanding of disorders such as dyslexia – a case in which individuals are thought to suffer from deficiencies in processing motion – by providing information about the functional role that specific brain regions play in motion perception.

Using functional magnetic resonance imaging (fMRI), researchers measured patterns of neural activity in the middle temporal (MT) region of the brain – an area associated with motion perception – under two different scenarios.

In the first, participants were asked to view objects moving only in one direction and to identify the direction in which the objects were moving (left or right). They were then presented with objects in which the direction of motion was ambiguous, or undefined, and asked to identify the main direction of motion.

The pattern of neural activity in the MT region was highly similar when observers viewed real motion to the left and when they thought they saw the ambiguously moving objects moving to the left.

“The close correspondence between the pattern of activation in MT and what the observer reports seeing suggests that this region of your brain plays an important role in generating conscious experience of the world around you,” Serences said.

* * *

For a terrific selection of optical illusions, we recommend both 75 Optical Illusions and Visual Phenomena and Mighty Optical Illusions.

Posted in Neuroscience | Leave a Comment »

The Science of Morality

Posted by The Situationist Staff on November 27, 2007

In this week’s Time Magazine, Jeffrey Kluger offers an excellent cover story on what neuroscientists and other mind scientists have discovered about how our morality connects to our brains. Below we excerpt a few segments of the story.

* * *

We’re a species that is capable of almost dumbfounding kindness. We nurse one another, romance one another, weep for one another. Ever since science taught us how, we willingly tear the very organs from our bodies and give them to one another. And at the same time, we slaughter one another. The past 15 years of human history are the temporal equivalent of those subatomic particles that are created in accelerators and vanish in a trillionth of a second, but in that fleeting instant, we’ve visited untold horrors on ourselves–in Mogadishu, Rwanda, Chechnya, Darfur, Beslan, Baghdad, Pakistan, London, Madrid, Lebanon, Israel, New York City, Abu Ghraib, Oklahoma City, an Amish schoolhouse in Pennsylvania–all of the crimes committed by the highest, wisest, most principled species the planet has produced. That we’re also the lowest, cruelest, most blood-drenched species is our shame–and our paradox.

The deeper that science drills into the substrata of behavior, the harder it becomes to preserve the vanity that we are unique among Earth’s creatures. We’re the only species with language, we told ourselves–until gorillas and chimps mastered sign language. We’re the only one that uses tools, then–but that’s if you don’t count otters smashing mollusks with rocks or apes stripping leaves from twigs and using them to fish for termites.

* * *

“Moral judgment is pretty consistent from person to person,” says Marc Hauser, professor of psychology at Harvard University and author of Moral Minds. “Moral behavior, however, is scattered all over the chart.” The rules we know, even the ones we intuitively feel, are by no means the rules we always follow.

Where do those intuitions come from? And why are we so inconsistent about following where they lead us? Scientists can’t yet answer those questions, but that hasn’t stopped them from looking. Brain scans are providing clues. Animal studies are providing more. Investigations of tribal behavior are providing still more. None of this research may make us behave better, not right away at least. But all of it can help us understand ourselves–a small step up from savagery perhaps, but an important one.

* * *

One of the first and most poignant observations of empathy in nonhumans was made by Russian primatologist Nadia Kohts, who studied nonhuman cognition in the first half of the 20th century and raised a young chimpanzee in her home. When the chimp would make his way to the roof of the house, ordinary strategies for bringing him down–calling, scolding, offers of food–would rarely work. But if Kohts sat down and pretended to cry, the chimp would go to her immediately. “He runs around me as if looking for the offender,” she wrote. “He tenderly takes my chin in his palm . . . as if trying to understand what is happening.”

You hardly have to go back to the early part of the past century to find such accounts. Even cynics went soft at the story of Binti Jua, the gorilla who in 1996 rescued a 3-year-old boy who had tumbled into her zoo enclosure, rocking him gently in her arms and carrying him to a door where trainers could enter and collect him. “The capacity of empathy is multilayered,” says primatologist Frans de Waal of Emory University, author of Our Inner Ape. “We share a core with lots of animals.”

* * *

While it’s impossible to directly measure empathy in animals, in humans it’s another matter. Hauser cites a study in which spouses or unmarried couples underwent functional magnetic resonance imaging (fMRI) as they were subjected to mild pain. They were warned before each time the painful stimulus was administered, and their brains lit up in a characteristic way signaling mild dread. They were then told that they were not going to feel the discomfort but that their partner was. Even when they couldn’t see their partner, the brains of the subjects lit up precisely as if they were about to experience the pain themselves. “This is very much an ‘I feel your pain’ experience,” says Hauser.

* * *

Pose these dilemmas to people while they’re in an fMRI, and the brain scans get messy. Using a switch to divert the train toward one person instead of five increases activity in the dorsolateral prefrontal cortex–the place where cool, utilitarian choices are made. Complicate things with the idea of pushing the innocent victim, and the medial frontal cortex–an area associated with emotion–lights up. As these two regions do battle, we may make irrational decisions. In a recent survey, 85% of subjects who were asked about the trolley scenarios said they would not push the innocent man onto the tracks–even though they knew they had just sent five people to their hypothetical death. “What’s going on in our heads?” asks Joshua Greene, an assistant professor of psychology at Harvard University. “Why do we say it’s O.K. to trade one life for five in one case and not others?”

* * *

For the rest of the article, click here. For a related Situationist post, see “Your Brain and Morality.”

Posted in Neuroscience, Social Psychology | 3 Comments »

56*

Posted by Will Li on November 26, 2007

An ABC News report by John Allen Paulos highlights an article in the Canadian general-interest magazine The Walrus in which author David Robbeson examines Joe DiMaggio’s 56-game hitting streak. Robbeson asks: “[w]as the streak the most singular sustained accomplishment in the history of sport or the work of a collective imagination seeking a new mythology?”

In his analysis of the streak, Robbeson looks at the way in which baseball, and the DiMaggio streak, was consumed and disseminated at the time. Some of his historical notes may be surprising to today’s fans.

* * *

The distractions of the war combined with the limitations of the media of the times to keep the particulars of the streak from public scrutiny. Though baseball was first broadcast on television in 1939 (a game between Princeton and Columbia), it wasn’t until after the war that telecasts became common. Radio had been integral to the national pastime since the thirties, and by the time war broke out many teams broadcast their entire schedules, but the Yankees were unable to attract a corporate sponsor willing to pay $75,000 to broadcast home games during the summer of ‘41. So, short of attending games themselves, fans of the Bronx Bombers could follow the streak only by reading the papers or listening to a nightly fifteen-minute radio re-enactment on WINS. Every DiMaggio at-bat not witnessed in person was thus filtered, condensed, and dramatized — fully left to the imagination.

As the streak progressed, it developed an odd and frequently misunderstood media inertia. Baseball people had never been especially smitten with the notion of a consecutive-games hitting streak, and newspapers began to keep track of it more as a statistical oddity than as a phenomenon that would immediately capture America’s imagination, as history would have us believe. The New York Times, for its part, never mentioned sports stories on its front page — sports simply lacked gravitas. For accounts of DiMaggio’s exploits, a Times reader had to turn as far back as page twenty-five, after the arts and entertainment pages, then skip past stories on the New York Giants and Brooklyn Dodgers, whom the Times covered with equal vigilance. In less-vaunted publications such as the Sporting News, DiMaggio was feted and lionized, but for the world at large the streak was page-twenty-five news. Even the day after it ended, the front page of the Times paid it no heed.

Outside of New York, reaction was mixed. Popular mythology holds that fans in other American League cities turned out in droves — largely, if not solely, to watch DiMaggio extend his record. DiMaggio biographers and many baseball historians seize upon large crowds, such as the one in Cleveland the night the streak came to an end, as evidence of public fervour. Actual attendance numbers tell a different story. Twenty-two of the fifty-six games saw crowds of fewer than 10,000 fans. Game forty-five, when DiMaggio broke Keeler’s record, was witnessed by only 8,682 people — in Yankee Stadium no less. All of 1,625 people witnessed the streak hit fifty in St. Louis. The sellouts noted by history were usually the result of doubleheaders, which drew fans seeking the bargain of an extra game, or contests played under lights, which were still relatively rare in 1941. And though more than 67,000 fans watched the streak end (under lights) in Cleveland, only 15,000 ventured to the game the previous day. This surprising variance in public attention allowed DiMaggio’s streak to progress quietly, and left those who helped perpetuate it to do so unnoticed.

In an essay related to his poem “Examination of the Hero in a Time of War,” Wallace Stevens called the poetry of war “a consciousness of fact, but of heroic fact, of fact on such a scale that the mere consciousness of it affects the scale of one’s thinking and constitutes a participating in the heroic.” A similar myth-making impulse seemed to affect the sports journalists of the era.

* * *

John Allen Paulos takes a good look at Robbeson’s argument in his ABC News piece, pointing out Stephen Jay Gould‘s better-known criticism of the streak and even including a bit of social psychology analysis:

* * *

Gould argues that these two, at best, weak hits as well as a couple of others seem out of place in a record set by a mythical hitter like DiMaggio. The reason is that people tend to believe that streaks are a causal consequence of courage and competence and that their lucky extension is somehow an affront to our conception of them. DiMaggio is too great a figure, people unconsciously think, to have his streak depend on such thin threads.

As psychologists Amos Tversky and Daniel Kahneman demonstrated years ago, however, people fervently, but mistakenly, believe in hot hands, in clutch hitters, in coming through under pressure, and don’t want to think of streaks as simply matters of luck. But luck is sometimes just that, and good hitters benefit more from it than do bad hitters. They will generally hit in longer streaks than will bad hitters just as heads-biased coins will result in longer strings of consecutive heads than tails-biased coins will.
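Paulos’s coin analogy is easy to check with a quick simulation. The sketch below is our illustration, not Gould’s or Paulos’s; the batting averages, four at-bats per game, the 154-game season, and the trial count are all toy assumptions, with each at-bat treated as an independent coin flip:

```python
# A quick Monte Carlo illustration of why better hitters tend to
# produce longer hitting streaks, with no "clutch" factor involved.
import random

def game_hit_prob(batting_avg, at_bats=4):
    """Chance of at least one hit in a game of independent at-bats."""
    return 1 - (1 - batting_avg) ** at_bats

def longest_streak(p_game, games=154):
    """Longest run of consecutive games with at least one hit."""
    best = current = 0
    for _ in range(games):
        current = current + 1 if random.random() < p_game else 0
        best = max(best, current)
    return best

for avg in (0.250, 0.350):
    p = game_hit_prob(avg)
    trials = [longest_streak(p) for _ in range(10_000)]
    print(f"{avg:.3f} hitter: mean longest streak "
          f"{sum(trials) / len(trials):.1f} games")
```

Even under these toy assumptions, the .350 hitter’s typical best streak runs several games longer than the .250 hitter’s: pure luck compounds in skill’s favor, which is exactly Paulos’s point.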

In other words, DiMaggio’s streak remained intact because of these calls by official scorer Dan Daniel, but so what? Some lucky breaks and a dubious call or two are to be expected in a long streak.

* * *

One of the questions raised by Robbeson’s analysis of the streak, and one which Paulos speculates about as well, is how we treat feats and records today. Even outside of baseball, there has been talk of asterisks – Don Shula has discussed asterisking the Patriots’ record if they manage to go 19-0, which, amusingly, has led some to question the impressiveness of the 1972 Dolphins. (Shula has since backed off his statements.)

Within baseball, while DiMaggio’s hitting streak itself is brought up every time a player gets much past the halfway mark (recently, Jimmy Rollins, Chase Utley, Willy Taveras and Moises Alou have had 30-game hitting streaks), it might be more telling to look at another mark: Barry Bonds and his chase of Hank Aaron’s home run record.

Could the “myth-making” impulse that Robbeson refers to be alive in a contrary sense today? Should we consider the attention Bonds receives from the media–only amplified in the wake of his recent indictment–to be “myth-destroying?” Critics have taken aim at the media’s coverage of Bonds and the information it presents. For example, Larry Brown at AOL Fanhouse posted an interesting analysis of ESPN’s poll, which purported to reveal that the public perception of Bonds was influenced by race. Other critics of print journalists who have written about Bonds and what he does to the purity of the sport point to MLB’s attendance numbers. If the fans are still filling the seats, should the media be so focused on Bonds and how he has tainted the sport?

Barry Bonds Breaks Record

Or, just as the legend of Joltin’ Joe’s streak grew over subsequent generations regardless of the scant attention paid during its formation, will Bonds’s legacy be decided in the future? If the retrospective analysis of a record is so important to its legacy, how much importance should we give the potential fate of the record-setting ball?

After he bought the record-setting ball, designer Marc Ecko took an online poll and, based on the results, announced that he will asterisk the ball and send it to the Hall of Fame. In response, Bonds pledged to boycott the Hall of Fame if it accepts the ball. But how much of a player’s legacy is actually decided by who or what is included in the Hall of Fame?

Posted in Situationist Sports, Social Psychology | 2 Comments »

The Situational Rewards of Wages

Posted by The Situationist Staff on November 24, 2007

The New York Times‘s Louis Uchitelle wrote an interesting article this summer about what many are calling the “New Gilded Age” and the way the other one-one-hundredth of a percent lives.

In it, Uchitelle quotes Jim Sinegal, the modestly remunerated Costco CEO, who argues that other business leaders are driven less by absolute amounts than by relative quantities of money. “I think that most of the people running companies today are motivated and pay is a small portion of the motivation,” Mr. Sinegal said. What, then, explains the skyrocketing wages of top executives in the business world? “Because everyone else is getting it,” Sinegal believes. “It is as simple as that. If somehow a proclamation were made that C.E.O.’s could only make a maximum of $300,000 a year, you would not have any shortage of very qualified men and women seeking the jobs.”

It’s a plausible hypothesis, and Sinegal is by no means the first to offer it. But a just-published study in Science provides a new kind of confirmation for the theory, as reported by BBC News in the article excerpted below.

* * *

Traditional economic theory assumes the only important factor is the absolute size of the reward. But researchers in the journal Science have shown that the relative size of one’s earnings plays a major role.

In the study, 38 pairs of male volunteers were asked to perform the same simple task simultaneously, and promised payment for success.

Both “players” were asked to estimate the number of dots appearing on a screen. Providing the right answer earned a real financial reward of between 30 euros (£22) and 120 euros (£86). Each of the participants was told how their partners had performed and how much they were paid.

* * *

Using magnetic resonance tomographs, the researchers examined the volunteers’ blood circulation throughout the activities. High blood flow indicated that the nerve cells in the respective part of the brain were particularly active. Neuroscientist Dr Bernd Weber explains: “One area in particular, the ventral striatum, is the region where part of what we call the ‘reward system’ is located. In this area, we observed an activation when the player completed his task correctly.”

A wrong answer, and no payment, resulted in a reduction in blood flow to the “reward region.” But the area “lit up” when volunteers earned money, and interestingly showed far more activity if a player received more than his partner.

This indicated that stimulation of the reward centre was linked not merely to individual success, but also to one’s success relative to that of others.

While behavioural experiments have suggested relative rewards may play a role in economic motivation, economist Professor Dr. Armin Falk, co-author of the paper, said: “It is the first time this hypothesis has been challenged using such an experimental approach.”

The professor emphasised to BBC News that, unlike behavioural experiments, brain scans had “no cognitive filter; we were monitoring immediate brain reaction”.

* * *

To read the entire BBC summary, click here. To listen to a terrific seven-minute report from NPR’s Weekend America show on the “gilded age” — “a time just like ours, but way, way uglier” — click here.

Posted in Neuroscience | 2 Comments »

The Implicit Value of Explicit Values

Posted by The Situationist Staff on November 23, 2007


In September of 2006, Karen Rouse wrote a Denver Post article summarizing the fascinating research of social psychologist Geoffrey Cohen. We excerpt portions of her article below.

* * *

A white male may feel comfortable in the boardroom. But put him on the basketball court with a team of black players, and his own awareness of the stereotype – that white men can’t jump – could be enough to hurt his game.

And so it is, researchers say, with African-American students who are aware of the negative stereotype that they are less intelligent than their white peers; the psychological threat of that stereotype could be enough to undermine their academic performance.

But new research by a University of Colorado professor suggests that a simple exercise in self-affirmation could not only result in higher grades for African-American students but also close the black-white achievement gap.

The research, led by CU-Boulder professor Geoffrey Cohen and Yale research scientist Julio Garcia and published this month in the journal Science, studied black and white seventh-graders at a Northeast middle school.

Black students who were asked to write about their most important values at the first of the school year earned much higher grades at the end of the three-month term than students who were asked to write about values least important to them.

Past research has already demonstrated that people experience stress in situations where they know they can be stereotyped, Cohen said.

“We all belong to social groups that are stereotyped,” Cohen said. “For whites, it’s relevant to sports but not academically. But for African- Americans and Latino Americans, it threatens their academic ability.”

The exercise of writing about important values has been shown to reduce stress, Cohen said. For the students, it was “affirming your sense of self-integrity. ‘This is who I am. This is what makes me, me.’”

The students were divided into two groups, with even numbers of black and white students in each group, and given an identical list of values, such as “politics,” “relationships with family,” “religion” and “being good at art.”

One group was told to pick values important to them and write for 15 minutes about why they were important. The second group was told to pick values least important to them and to explain why they might be important to someone else. The experiment was conducted twice – in 2003 and 2004 – at the same school.

The exercise was done at the beginning of the school year, when stress was thought to be highest. At the end of a roughly three-month academic term, researchers Cohen and Garcia found there was no difference between how the two groups of white students performed.

However, black students in the first group, which wrote about personal affirmations, had higher grades than the black students in the group with the more neutral assignment.

In addition, the achievement gap in grade-point averages between blacks and whites narrowed by roughly 40 percent, Cohen said.

The research is significant because the results hinged on “unleashing what is already there in the environment,” Cohen said.

The conditions for the students performing well already existed but were being held back by the stereotype threat.

* * *

Cohen said that while 15 minutes seems short, the exercise “is bringing to the fore something that is very important to these kids: a long-held personal value.”

Cohen also said the exercise may not have the same effect in another setting, such as an all-black urban school or with all poor white students. “This is not a silver bullet” to fix the achievement gap, he said. “There needs to be a lot more research.”

* * *

To access the Science article, click here. For some Situationist posts discussing stereotype threat and its effects, see “The Situation of ‘Winners’ and ‘Losers,’” “Gender Imbalanced Situation of Math, Science, and Engineering,” “Race Attributions and Georgetown University Basketball,” “Sex Differences in Math and Science,” “You Shouldn’t Stereotype Stereotypes,” “Women’s Situation in Economics,” and “Your Group is Bad at Math.”

Posted in Implicit Associations, Social Psychology | 4 Comments »

Thanksgiving as “System Justification”?

Posted by J on November 21, 2007

The first Thanksgiving, painting by Jean Leon Gerome Ferris

Thanksgiving has many associations — struggling Pilgrims, crowded airports, autumn leaves, heaping plates, drunken uncles, blowout sales, and so on. At its best, though, Thanksgiving is associated with, well, thanks giving. The holiday provides a moment when many otherwise harried individuals leading hectic lives decelerate just long enough to muster some gratitude for their harvest. Giving thanks — acknowledging that we, as individuals, are not the sole determinants of our own fortunes — seems an admirable, humble, and even situationist practice, worthy of its own holiday.

But I’m interested here in the potential downside to the particular way in which many people go about giving thanks.

Situationist contributor John Jost and his collaborators have studied a process that they call “system justification” — loosely the motive to defend and bolster existing arrangements even when doing so seems to conflict with individual and group interests. Jost, together with Aaron Kay and several other co-authors, recently summarized the basic tendency to justify the status quo this way (pdf):

Whether because of discrimination on the basis of race, ethnicity, religion, social class, gender, or sexual orientation, or because of policies and programs that privilege some at the expense of others, or even because of historical accidents, genetic disparities, or the fickleness of fate, certain social systems serve the interests of some stakeholders better than others. Yet historical and social scientific evidence shows that most of the time the majority of people—regardless of their own social class or position—accept and even defend the legitimacy of their social and economic systems and manage to maintain a “belief in a just world” . . . . As Kinder and Sears (1985) put it, “the deepest puzzle here is not occasional protest but pervasive tranquility.” Knowing how easy it is for people to adapt to and rationalize the way things are makes it easier to understand why the apartheid system in South Africa lasted for 46 years, the institution of slavery survived for more than 400 years in Europe and the Americas, and the Indian Caste system has been maintained for 3000 years and counting.

Manifestations of the system-justification motive pervade many of our cognitions, ideologies, and institutions. This post reflects my worry that the Thanksgiving holiday might also manifest that powerful implicit motive. No doubt, expressing gratitude is generally a healthy and appropriate practice. Indeed, my sense is that Americans too rarely acknowledge the debt they owe to other people and other influences. There ought to be more thanks giving.

Nonetheless, the norm of Thanksgiving seems to be to encourage a particular kind of gratitude — a generic thankfulness for the status quo. Indeed, when one looks at what many describe as the true meaning of the holiday, the message is generally one of announcing that current arrangements — good and bad — are precisely as they should be.

Consider the message behind the first presidential Thanksgiving proclamation. In 1789, President George Washington wrote:

“Now therefore I do recommend and assign Thursday the 26th day of November next to be devoted by the People of these States to the service of that great and glorious Being, who is the beneficent Author of all the good that was, that is, or that will be—That we may then all unite in rendering unto Him our sincere and humble thanks—for His kind care and protection of the People of this Country . . . for the signal and manifold mercies, and the favorable interpositions of his Providence which we experienced in the tranquility, union, and plenty, which we have since enjoyed . . . and also that we may then unite in most humbly offering our prayers and supplications to the great Lord and Ruler of Nations and beseech him to pardon our national and other transgressions . . . . To promote the knowledge and practice of true religion and virtue, and the increase of science among them and us—and generally to grant unto all Mankind such a degree of temporal prosperity as he alone knows to be best.”

Existing levels of prosperity, by this account, reflect the merciful and omniscient blessings of the “beneficent Author” of all that is good.

More recently, President George W. Bush offered a similar message about the meaning of the holiday:

“In the four centuries since the founders . . . first knelt on these grounds, our nation has changed in many ways. Our people have prospered, our nation has grown, our Thanksgiving traditions have evolved — after all, they didn’t have football back then. Yet the source of all our blessings remains the same: We give thanks to the Author of Life who granted our forefathers safe passage to this land, who gives every man, woman, and child on the face of the Earth the gift of freedom, and who watches over our nation every day.”

The faith that we are being “watched over” and that our blessings and prosperity are the product of a gift-giving force is extraordinarily affirming. All that “is,” is as that “great and glorious Being” intended.

From such a perspective, giving thanks begins to look like a means of assuring ourselves that our current situation was ordained by some higher, legitimating force. To doubt the legitimacy of existing arrangements is to be ungrateful.

A cursory search of the internet for the “meaning of Thanksgiving” reveals many similar recent messages. For instance, one blogger writes, in a post entitled “Teaching Children the Meaning of Thanksgiving,” that:

“your goal should be to move the spirit of Thanksgiving from a one-day event to a basic life attitude. . . . This means being thankful no matter what our situation in life. Thankfulness means that we are aware of both our blessings and disappointments but that we focus on the blessings. . . . Are you thankful for your job even when you feel overworked and underpaid?”

Another piece, entitled “The Real Meaning of Thanksgiving,” includes this lesson regarding the main source of the Pilgrims’ success: “It was their devotion to God and His laws. And that’s what Thanksgiving is really all about. The Pilgrims recognized that everything we have is a gift from God – even our sorrows. Their Thanksgiving tradition was established to honor God and thank Him for His blessings and His grace.”

If we are supposed to be thankful for our jobs even when we are “overworked and underpaid,” should we also be thankful for unfairness or injustice? And if we are to be grateful for our sorrows, should we then be indifferent toward their earthly causes?

A third article, “The Productive Meaning of Thanksgiving,” offers these “us”-affirming, guilt-reducing assurances: “The deeper meaning is that we have the capacity to produce such wealth and that we live in a country that affords us our right to exercise the virtue of productivity and to reap its rewards. So let’s celebrate wealth and the power in us to produce it; let’s welcome this most wonderful time of the year and partake without guilt of the bounty we each have earned.”

That advice seems to mollify any sense of injustice by giving something to everyone. Those with bountiful harvests get to enjoy their riches guiltlessly. Those with meager harvests can be grateful for the fact that they live in a country where they might someday enjoy richer returns from their individual efforts.

Yet another post, “The Meaning for Thanksgiving,” admonishes readers to be grateful, because they could, after all, be much worse off:

[M]aybe you are unsatisfied with your home or job? Would you be willing to trade either with someone who has no hope of getting a job or is homeless? Could you consider going to Africa or the Middle East and trade places with someone that would desperately love to have even a meager home and a low wage paying job where they could send their children to school without the worry of being bombed, raped, kidnapped or killed on a daily basis?

* * *

No matter how bad you think you have it, there are people who would love to trade places with you in an instant. You can choose to be miserable and pine for something better. You could choose to trade places with someone else for all the money they could give you. You could waste your gift of life, but that would be the worst mistake to make. Or you can rethink about what makes your life great and at least be happy for what you have then be patient about what you want to come to you in the future.

If your inclination on Thanksgiving is to give thanks, I do not mean to discourage you. My only suggestion is that you give thanks, not for the status quo, but for all of the ways in which your (our) own advantages and privileges are the consequence of situation, and not simply your individual (our national) disposition. Further, I’d encourage you to give thanks to all those who have gone before you who have doubted the status quo and who have identified injustice and impatiently fought against it.

Happy Thanksgiving!

Posted in Events, History, Ideology, System Legitimacy | 4 Comments »

The (Unconscious) Situation of our Consciousness – Part II

Posted by The Situationist Staff on November 20, 2007

Representation of consciousness from the 17th century.

This is the second in a series of posts summarizing the research on the hidden situation of our consciousness. This post, like Part I, draws from a 2003 article by Situationist contributors Jon Hanson and David Yosifon, “The Situational Character.” Part I began with Hanson and Yosifon’s summary of some of the fascinating research revealing the ubiquity of “automaticity.” This post picks up there with the question: “If most of what we perceive, feel, and do is driven by automatic processes, then why is it that most of us perceive most of our behavior to be the consequence of our conscious will?”

* * *

There are several reasons. First, we are rarely conscious of the automatic – indeed, that’s the point: automaticity frees up for other purposes our extremely limited capacity for conscious thinking or acting. It is as if automaticity occurs silently in the dark, whereas conscious thinking happens noisily beneath a spotlight. For the same reason that homicides seem a far more common cause of death than stomach cancer (even though the reverse is true), the conscious eclipses the automatic before our introspective eye.

Perhaps more important, when we do experience ourselves consciously willing our actions, we are often mistaken. Daniel Wegner, in his superb book, The Illusion of Conscious Will, brings together an intriguing array of direct evidence to make his case that we humans are subject to an illusion of will (which . . . we think has been circumstantially implied in evidence of the more general illusion of disposition).

Consider the case of “phantom limbs.” People who have had an arm or a leg amputated usually report that they continue to “feel” the presence of the limb long after it is gone. One pair of researchers who studied a group of three hundred World War II amputees found that ninety-eight percent experienced the phantom limb phenomenon. But there is more: Many amputees report that they can voluntarily move their phantom limbs, especially their fingers and toes. They report having the experience of consciously willing the movement of the limb despite the absence of either. This is one intriguing piece of evidence that “the intention to move can create the experience of conscious will without any action at all.”

Another study provides a further clue to the puzzle of conscious will. Researchers used highly sensitive electromyographical devices to study the patterns of electrical impulses generated during the performance of a “willed action.” Hooked to electrodes, subjects were asked to move their fingers “at will.” The researchers established a baseline electrical impulse that was witnessed in the brain shortly before the subjects moved their finger, which preceded a second impulse that was seen when the finger actually moved. This first impulse was dubbed the “readiness potential.” In a recent version of the study, subjects were placed before an especially sensitive clock, and were asked to report for each finger movement the position of the clock hand at the moment that they experienced a “conscious awareness of ‘wanting’ to perform the finger movement.” The researchers found that they were able to identify three distinct blips (to use the scientific term) in the electrical impulses of the brain throughout the course of action. The first was the “readiness potential” registered in the baseline. Sometime after the “readiness potential,” however, came the experience of willing the finger. Finally, in a third distinct moment, there was an impulse associated with the actual movement of the finger. The researchers discovered that the subject’s “readiness potential” occurred distinctly before the subjects themselves perceived consciously wanting to move the finger. The experience of conscious will, it appears, arises at some point after the brain has already begun the action. As the chief researcher of this study concluded:

[T]he initiation of the voluntary act appears to be an unconscious cerebral process. Clearly, free will or free choice of whether to act now could not be the initiating agent, contrary to one widely held view. This is of course also contrary to each individual’s own introspective feeling that he/she consciously initiates such voluntary acts; this provides an important empirical example of the possibility that the subjective experience of a mental causality need not necessarily reflect the actual causative relationship between mental and brain events.


In another fascinating study, researchers put a series of subjects into a “transcranial magnetic stimulation” device, which has been found to cause—through a directed magnetic impulse—the involuntary movement of different parts of the human body. Without explaining the operation of the device to subjects, the experimenters asked subjects to move either their right or their left finger, whichever they chose, whenever they heard a click. The click was actually the sound of the device turning on, and forcing the movement of a particular digit. Although the magnetic impulses dictated which finger the subjects moved, the subjects nevertheless perceived that they were choosing which finger to move, and then moving it. “When asked whether they had voluntarily chosen which finger to move, participants showed no inkling that something other than their will was creating their choice.” Findings such as these suggest that the experience of conscious will may stem from an internal system that is distinct from action itself and the action’s true source. Put differently, willing may be different from acting, and although the experience of both may often be coterminous, they are not necessarily causally related. Furthermore, even when some unappreciated situational force—including the business end of a transcranial magnetic stimulation device—is leading us to act in a particular way, we tend to experience our actions as volitional, willed choices. Again, we miss situation and see disposition.

Based on his review of many such studies, not to mention his own research, Wegner concludes that our minds produce the experience of conscious will through a process that is independent of the actual cause of our behavior. “[W]e must be careful to distinguish,” Wegner argues, “between . . . empirical will—the causality of the person’s conscious thoughts as established by . . . their covariation with the person’s behavior—and the phenomenal will—the person’s reported experience of will.”

There are, to be sure, times when we experience that we have willed something when, in fact, we have. Foregoing a just-out-of-the-oven chocolate-chip cookie can be, when we succeed, evidence of the empirical will. But the experience of will is not reliable evidence of the empirical will. The experience of will is generated by our minds to accompany behaviors whose source may be unwilled situation. “The experience of will,” as Wegner puts it, “is the way our minds portray their operations to us, not their actual operation.” Wegner’s diagnosis reveals the limited viability of the experience of will as a last bastion of dispositionism.

Though we perceive will, and behave and experience ourselves “as if” our will were controlling our behavior, and though we project will onto the behavior of others, these intuitive conceptions of the will are fundamentally unreliable indicators of both the reality of our will and the source of our behavior. Here again, there is more to the situation:

[T]he brain structure that provides the experience of will is separate from the brain source of action. It appears possible to produce voluntary action through brain stimulation with or without an experience of conscious will. This, in turn, suggests the interesting possibility that conscious will is an add-on, an experience that has its own origins and consequences. The experience of will may not be very firmly connected to the processes that produce action, in that whatever creates the experience of will may function in a way that is only loosely coupled with the mechanisms that yield action itself.

A final experiment suggests the extent to which our experience of will can be subject to situational influence, again without our conscious awareness. Subjects viewed a computer screen that flashed strings of letters and were asked to judge whether they saw words in what flashed. The screen would go entirely blank once each trial, either after the subject pressed the response button, or automatically after a very short time (400-650 milliseconds) if the subject failed to respond. The intervals were so quick that it was difficult for subjects to tell whether their response triggered the blank screen, or whether it had automatically gone blank. One group of subjects, however, was subliminally primed with a flash of the word “I” or “me” (subjects reported not recognizing it) just prior to the flash of letters that they could consciously see and were to evaluate. The researchers found that subjects primed with the dispositionist terms “I” or “me” were more likely to conclude that they had caused the screen to go blank than were subjects who had not been so primed. The subjects, it seems, “were influenced by the unconscious priming of self to attribute an ambiguous action to their own will.” Our experience of will, then, is not only an internal illusion; it is an internal illusion that is susceptible to external situational manipulation.

The will, it turns out, rather than being the trump card in the dispositionist’s deck, may be the joker in our dispositional delusion. As Wegner summarizes:

The unique human convenience of conscious thoughts that preview our actions gives us the privilege of feeling we willfully cause what we do. In fact, however, unconscious and inscrutable mechanisms create both conscious thought about action and the action, and also produce the sense of will we experience by perceiving the thought as the cause of the action. So, while our thoughts may have deep, important, and unconscious causal connections to our actions, the experience of conscious will arises from a process that interprets these connections, not from the connections themselves.

We want to emphasize again what we are not claiming, lest our actual claims be wrongly caricatured and dismissed. We have not argued here, or elsewhere in this Article, that there is “no such thing” as will, or that everything we seem to will is, to the contrary, determined for us. We do not doubt the existence of the individual human will, and we do not doubt that there is human genius rightly to be attributed to it. Our point, rather, is that our experience of will—our familiar experience that our will is responsible for our conduct—is often not a reliable indicator of the actual cause of our behavior. The felt experience of will therefore contributes greatly to our dispositionism. Where we are moved situationally, the phenomenon of will fills out our stories and helps to eclipse our vision of the situational influences that move us. When it seems that our “will” is doing the moving, it follows that we must have “chosen” our actions. And if we chose our actions, we must have had reasons or preferences for doing so. Thus, the illusion of will is a central feature of the illusion of dispositionism. How, after all, can situation be moving us, when we can “feel” the disposition?

Our point, then, is both subtle and disquieting: The experienced “will,” rather than a mirror and measure of our true selves, may be a mask in the disguise that keeps us from seeing what really moves us.

* * *

To read Hanson and Yosifon’s law review article from which this excerpt is drawn, go to “The Situational Character.” For a sample of previous posts discussing the role of unconscious and automatic causes of behavior, see “The Situation of Reason,” “The Magnetism of Beautiful People,” and “The Unconscious Genius of Baseball Players.”

Posted in Choice Myth, Social Psychology | 3 Comments »

Deep Capture – Part III

Posted by J on November 19, 2007

“Blind Faith” - Image by Marc Scheff at http://sketchbook.dangermarc.com/

This is the third of a multi-part series on what Situationist Contributor David Yosifon and I call “deep capture.” This post, like Part I and Part II, is drawn from our 2003 article, “The Situation” (downloadable here).

The most basic argument behind the prediction of deep capture is that if people are moved by internal and external situation (particularly while believing themselves to be moved primarily by disposition), then, in order to move them, there will be a hard-to-see or hard-to-take-seriously competition over the situation.

Part I of this series explained that our “deep capture” story is very much analogous to the (shallow) capture story told by economists (such as Nobel laureate George Stigler) and public choice theorists for decades regarding the competition over prototypical regulatory institutions. Part II looked to history (specifically, Galileo’s recantation) for another analogy to the process that we claim is widespread today — the deep capture of how we understand ourselves. This post picks up on both of those themes and explains that Stigler’s “capture” story has implications far broader and deeper than he or others realized.

(Situationist artist Marc Scheff is providing the remarkable images at the top of each post in this series.)

* * *

“[T]here have been opened up to this vast and most excellent science, of which my work is merely the beginning, ways and means by which other minds more acute than mine will explore its remote corners.”

–Galileo Galilei

1. Some Deep Implications of Shallow Capture

In identifying the phenomenon of capture, Stigler and his contemporaries obliterated the once-conventional view of regulation. They refuted the naive presumption that had long been protected behind the ambiguous (and, therefore, easily defended) concept of “the public interest,” and provided a far more realistic (albeit disturbing) account of the sources and effects of regulation. Regulation was “caused” less by public-spirited and well-advised regulators and more by the situational constraints imposed upon them by competing economic entities, with the most powerful entities wielding the most influence. In other words, Stigler identified and substantially overturned what might be called the regulatory fundamental attribution error. The older “public interest” regulatory theory maintained a kind of dispositionist view of a constant figure, evaluating influences, measuring public welfare, and making decisions accordingly. Regulatory theory essentially rested on a view of the regulator as a rational actor whose stable preferences were in the public interest. By studying the regulator’s actions and ignoring the regulator’s words, economists like Stigler were able to see new patterns and surmise some of the situational influences that generated them.

But Stigler’s work barely breaks the surface of situationism and identifies only a very shallow form of capture. When one takes seriously the power of the situation–exterior and interior–one can begin to understand the potential depths of capture. There are several ways in which capture is likely to run much deeper than Stigler, or others applying and advancing his insights, have recognized.

2. The Depth of Capture

Again, returning to Galileo’s story may help make evident what is invisible in our midst. First, as the Catholic Church’s efforts revealed, there are other capture-worthy and capturable institutions and individuals beyond merely administrative regulators. Recall that Galileo had no official regulatory authority either in the state or in the Church. What he had was a certain level of public legitimacy, and therefore power, as a renowned scientist. His theories, evidence, and conclusions were important as a confirmation of, or challenge to, the “truth” of the Church’s teachings. As a result, Galileo’s positions were well worth capturing. Similarly, today any institutions or individuals capable of influencing existing wealth and power distributions will be subject to the pressures of capture. In this sense, Stigler and those who subscribe to his theory are, like the public-interest theorists they replaced, far too shallow.

If administrative regulators are vulnerable to the forces of capture by certain interests, as most everyone agrees they are, then the likelihood of a deeper capture seems undeniable. There is nothing special about administrative regulators–except, perhaps, the general concern that they may be captured. Virtually every other institution in our society seems just as vulnerable. After all, contemporary scholars and commentators have rarely even considered, much less taken seriously, the problem of deep capture. Given that nescience, one would expect other institutions to be constructed without heed to the dynamics of capture. In a world without foxes, a farmer will not guard the hen-house. And because deep capture occurs situationally–outside of view by, and with the induced consent of, the captured–any loss of eggs will either go unnoticed or will be perceived as natural and just.

There is a second general way in which traditional capture theory is too shallow. To see this, it is necessary to look deeper than the behavior of the captured institutions and individuals. Beneath the surface of behavior, the interior situation of relevant actors is also subject to capture. Indeed, much of the power of deep capture comes from the fact that its targets include the way that people think and the way that they think they think.

The Catholic Church would have been far less troubled by Galileo, we suspect, if he had not been writing and publishing his ideas broadly in an attempt to persuade others to reject then-conventional wisdom. Eschewing the scientific conventions of his day, Galileo published many of his discoveries not in Latin but in Italian. He was committed to altering the opinions of people in his society, not simply to recording his measurements for a narrow scientific audience. It was the danger Galileo posed to the Church’s basic knowledge structures–which were embraced by most of the intelligentsia and lay people of the time–that led forces, including vested academic interests, to urge the Church to literally capture Galileo. Galileo’s work went beyond offering a simple challenge to established propositions such as geocentric cosmology; it advocated an entirely different intellectual and moral approach, one that aimed to discredit the “cult” of tradition. Thus, when Galileo advanced heliocentrism, as he did in his famous letter to the Grand Duchess Christina, he did so in the context of a more comprehensive rejection of the view of knowledge as nothing more than a set of pre-ordained revelations:

[W]ho wants the human mind put to death? Who is going to claim that everything in the world which is observable and knowable has already been seen and discovered? . . . [O]ne must not, in my opinion . . . block the way of freedom of philosophizing about things of the world and of nature, as if they had all already been discovered and disclosed with certainty. Nor should it be considered rash to be dissatisfied with opinions which are almost universally accepted . . . .

The message that common sense notions should be challenged was deeply threatening to the Catholic Church of the seventeenth century, which defined faith as it had since the Middle Ages–as obedience to the teachings of religious authorities. The highest crime an individual could commit was that of heresy–the word itself deriving from the Greek word hairesis, meaning “choice.” In order to prevent the wider populace from realizing that a “choice” existed, Galileo had to be silenced.

Those in power thus captured the institutions and individuals that threatened their dominant position, including an individual scientist capable of altering ideas or knowledge in a way that might weaken their power. They did so through a process intended to suggest that Galileo freely chose his recantation and resultant silence. Galileo, wisely, did not proclaim that he was being forced to recant under the threat of death; he stated instead that he was trying to clarify the possible confusion that his errors had created and make clear that he, upon reflection, “abjure[d], curse[d], and detest[ed] the above-mentioned errors and heresies . . . .” The Church thus applied situational pressure to generate the appearance of “dispositional” recantation. And the people at that time, inasmuch as their knowledge structures and understanding of the world were influenced by the Church, and insofar as the Church managed to squelch other ideas or knowledge structures, were also deeply captured.

Understanding that capture is directed at both our exteriors and interiors clears up some confusion and debate in the shallow capture literature. When Stigler’s evidence of capture emerged, economists, political scientists, and public choice theorists got busy trying to identify the precise mechanics of the regulatory black box that Stigler mostly ignored. True to form, they began with the rational actor model of human behavior and sought to explain capture as the consequence of the self-interested, maximizing dispositions of individual regulators. Yet, while simple formulations have given way to increasingly elaborate ones, public-choice theory is still dogged by the fact that it is unrealistically “cynical” (meaning that the assumed dispositions of regulatory actors are perceived to be unrealistically selfish). After all, many governmental actors and regulatory agents claim to be, and actually seem to be, motivated by the public interest and try to act that way; that is, many regulators’ actions appear more consistent with their ideological beliefs than with a narrow conception of self-interest.

The problem with shallow capture is not that it cannot always explain the part played by the dispositions of regulatory actors, but rather that it takes dispositions so seriously in the first place. Deep capture makes clear that people’s intentions and beliefs may have little to do with their behavior and that, insofar as they do, those intentions and beliefs are part of what interests compete to capture.

When Catholic astronomers of the seventeenth century stated that they believed, as most profoundly did, that the Earth was at the center of the universe, deep capture was at work. Their astronomy was part of a larger, interconnected set of truths taught to them in seminary and reinforced at many turns–some seen, some unseen–in their society. Similarly, lay people had no reason to dispute those truths and faced situational influences just as powerful, despite being less visible, as the gun to the head or fire to the feet that Galileo experienced. That a regulator may act out of ideological dispositions no more implies that she is free from capture than the changing lengths of shadows on a summer afternoon imply that the sun is revolving around the Earth.

The question that should be asked is not: “Who among the regulators is corrupt or so selfishly motivated as to disregard the ‘public interest’?” The question that should be asked is: “Who among us is the most powerful and most capable of deeply capturing our exteriors and interiors and, even, of capturing what we mean by the ‘public interest’?”

3. The Invisibility of Capture

By “deep capture,” then, we are referring to the disproportionate and self-serving influence that the relatively powerful tend to exert over all the exterior and interior situational features that materially influence the maintenance and extension of that power–including those features that purport to be, and that we experience as, independent, volitional, and benign. Because the situation generally tends to be invisible (or nearly so) to us, deep capture tends to be as well.

This raises the question: if deep capture is so hard to see, then why is it so obvious in the Galileo example? There are several reasons. To begin with, at the time, we doubt that it was so visible. We suspect that few observers saw anything untoward or illegitimate about Galileo’s inquisitorial experience or any reason to doubt the “knowledge” that it produced. The situational pressures that, to us, were glaringly excessive during the Inquisition were probably not perceived as excessive at the time.

The situational forces confronting Galileo may be easier for us to see now because we live in a radically different environment. We are looking at another generation of people in another country whose situational worldviews we reject and whose victim, Galileo, we revere. They are “them,” and Galileo is “us.” People are motivated to attribute bad outcomes to out-group members. The contrast is heightened by the historical construction of the event as a lesson on the horrors of the Inquisition and the dangerous distortions that result when religion is allowed to dominate (or, we might say, “capture”) science. The role of disposition and deep capture in the Galileo story is, today and to us, conspicuous, almost palpable. But seeing our own situation and its deep capture is not.

* * *

Part IV of this series takes up that difficult challenge of looking for the deep capture of “us.”

Posted in Choice Myth, Deep Capture, History, Legal Theory | 7 Comments »

The Body Has a Mind of its Own

Posted by The Situationist Staff on November 18, 2007

The Situationist Staff is recommending a fascinating new book, The Body Has a Mind of Its Own, co-authored by the mother-son team of Sandra Blakeslee, a New York Times science contributor, and science writer Matthew Blakeslee.

The book’s website opens with this teaser: “Your body has a mind of its own. You know it’s true. You can feel it, you can sense it, even though it may be hard to articulate. You know your body is more than just a meat-vehicle for your mind to cruise around in, but how deeply are mind, brain and body truly interwoven? Take a moment to ask yourself: How do you know you have a body? What gives you your sense of being in charge of it, and how real, how robust, how fragile is that sense? How does your mind know where your body ends and the outside world begins? Answers can be found in the emerging science of body maps . . . .” (You can read more here.)

The Body Has a Mind of Its Own has been very favorably received and highly acclaimed. Nature had this to say:

[The book provides] some of the most exciting discoveries in neuroscience. The unifying theme is the idea that the way our body is mapped by neural circuits in the brain can account for a range of our experiences and perceptions. Using a readable and inspiring format, the authors showcase new and classic research on neural representations, without compromising accuracy . . . . Anecdotes and ideas from sister disciplines, including neurology, psychiatry and cultural anthropology, mix comfortably with laboratory observations. New discoveries titillate our curiosity, explaining common phenomena such as yo-yo dieting and contagious yawning as well as some more bizarre neurological abnormalities such as alien-hand syndrome and supernumerary-limb perception. Also covered are why you cannot tickle yourself, why some people have ‘out-of-body’ experiences, and why babies in Mali walk earlier than those anywhere else in the world . . . . The Body has a Mind of its Own is a thought-provoking book of wide appeal. It is a striking example of how complex issues in contemporary research can be presented to entertain everyone.

We are happy to climb aboard the bandwagon. To provide our readers with a better sense of the book, we’ve excerpted a terrific review by Wray Herbert, written for the Washington Post. (Herbert is himself one of the best writers about mind-science matters, the main blogger at We’re Only Human, and author of the “Mind Matters” column for Newsweek.com.)

* * *

Sometimes, science advances on luck, and so it was with the monkey and the ice cream cone. On a summer day in 1991, neuroscientists in a laboratory at Parma University had wired up a monkey’s brain for a simple experiment. They wanted to see which neurons fired during the series of movements involved in the everyday act of drinking from a cup: the reaching, the finger curling, the grasping and so forth. But on that day the monkey was more interested in a student eating an ice cream cone. The monkey watched intently as the student moved the cone to his mouth and, as it watched, the motor neurons in its brain began to fire. The firing was a classic neurological signature. It indicated that the animal was moving its arms and hands — but in fact the monkey was quite still.

What the Italian scientists were witnessing was the first evidence for what are now known as “mirror neurons”: specialized cells in the brain that inextricably link intention with movement and perception. They explain why yawns are contagious, why sports fans contort their bodies in unison, and — on a more profound level — why one human being can empathically “feel” the distress or joy of another. They are nothing less than the neurological foundation for all human connection.

. . . Sandra Blakeslee and Matthew Blakeslee . . . tell the monkey story in a late chapter of The Body Has a Mind of Its Own, their captivating exploration of the brain’s uncanny ability to map the world. The discovery of mirror neurons, while fortuitous, was actually the culmination of decades of painstaking scientific inquiry in dozens of brain laboratories around the world, and the authors take us into those labs to watch this important scientific tale unfold. They also take us on a tour through the squishy gray matter that embodies our sense of self and otherness.

* * *

. . . The brain has lots of built-in maps, and they don’t stop at our toes and fingertips. Indeed, every time we put on a piece of jewelry or pick up a hoe or scribble with a fountain pen, the brain incorporates those objects — and the space they occupy — into our personal maps. That’s why you can actually perceive the texture of a steak through a knife and fork, and never mistake it for Jello. From the brain’s perspective, those tools are simply extensions of you.

* * *

. . . The authors have essayed some difficult terrain here and, for the most part, with clarity. They know the inner workings of both the scientific laboratory and the brain and wisely keep their heady subject matter anchored in those worlds. Readers will emerge with a far keener sense of where they are.

* * *

To read all of Wray Herbert’s Washington Post review, click here.

Posted in Book, Life | 1 Comment »

Wise Parents Don’t Have “Smart” Kids

Posted by The Situationist Staff on November 16, 2007

We have previously devoted several posts to the powerful effect of self-schemas and personal narratives and, more specifically, to the remarkable research by Carol Dweck regarding the importance of how people think about intelligence and learning. In February of this year, Po Bronson wrote a terrific article in New York Magazine, in which he provided a delightful and accessible summary of research by Dweck and her colleagues illustrating the “inverse power of praise.” We excerpt portions of Bronson’s article below.

* * *

What do we make of a boy like Thomas?

Thomas (his middle name) is a fifth-grader at the highly competitive P.S. 334, the Anderson School on West 84th. . . . Thomas hangs out with five friends from the Anderson School. They are “the smart kids.” Thomas’s one of them, and he likes belonging.

Since Thomas could walk, he has heard constantly that he’s smart. Not just from his parents but from any adult who has come in contact with this precocious child. When he applied to Anderson for kindergarten, his intelligence was statistically confirmed. The school is reserved for the top one percent of all applicants, and an IQ test is required. Thomas didn’t just score in the top one percent. He scored in the top one percent of the top one percent.

But as Thomas has progressed through school, this self-awareness that he’s smart hasn’t always translated into fearless confidence when attacking his schoolwork. In fact, Thomas’s father noticed just the opposite. “Thomas didn’t want to try things he wouldn’t be successful at,” his father says. “Some things came very quickly to him, but when they didn’t, he gave up almost immediately, concluding, ‘I’m not good at this.’ ” With no more than a glance, Thomas was dividing the world into two—things he was naturally good at and things he wasn’t.

* * *

Why does this child, who is measurably at the very top of the charts, lack confidence about his ability to tackle routine school challenges?

Thomas is not alone. For a few decades, it’s been noted that a large percentage of all gifted students (those who score in the top 10 percent on aptitude tests) severely underestimate their own abilities. Those afflicted with this lack of perceived competence adopt lower standards for success and expect less of themselves. They underrate the importance of effort, and they overrate how much help they need from a parent.

When parents praise their children’s intelligence, they believe they are providing the solution to this problem. According to a survey conducted by Columbia University, 85 percent of American parents think it’s important to tell their kids that they’re smart. . . . The constant praise is meant to be an angel on the shoulder, ensuring that children do not sell their talents short.

Smart Kids

But a growing body of research—and a new study from the trenches of the New York public-school system—strongly suggests it might be the other way around. Giving kids the label of “smart” does not prevent them from underperforming. It might actually be causing it.

For the past ten years, psychologist Carol Dweck and her team at Columbia (she’s now at Stanford) studied the effect of praise on students in a dozen New York schools. Her seminal work—a series of experiments on 400 fifth-graders—paints the picture most clearly.

Dweck sent four female research assistants into New York fifth-grade classrooms. The researchers would take a single child out of the classroom for a nonverbal IQ test consisting of a series of puzzles—puzzles easy enough that all the children would do fairly well. Once the child finished the test, the researchers told each student his score, then gave him a single line of praise. Randomly divided into groups, some were praised for their intelligence. They were told, “You must be smart at this.” Other students were praised for their effort: “You must have worked really hard.”

Why just a single line of praise? “We wanted to see how sensitive children were,” Dweck explained. “We had a hunch that one line might be enough to see an effect.”

Then the students were given a choice of test for the second round. One choice was a test that would be more difficult than the first, but the researchers told the kids that they’d learn a lot from attempting the puzzles. The other choice, Dweck’s team explained, was an easy test, just like the first. Of those praised for their effort, 90 percent chose the harder set of puzzles. Of those praised for their intelligence, a majority chose the easy test. The “smart” kids took the cop-out.

Why did this happen? “When we praise children for their intelligence,” Dweck wrote in her study summary, “we tell them that this is the name of the game: Look smart, don’t risk making mistakes.” And that’s what the fifth-graders had done: They’d chosen to look smart and avoid the risk of being embarrassed.

In a subsequent round, none of the fifth-graders had a choice. The test was difficult, designed for kids two years ahead of their grade level. Predictably, everyone failed. But again, the two groups of children, divided at random at the study’s start, responded differently. Those praised for their effort on the first test assumed they simply hadn’t focused hard enough on this test. “They got very involved, willing to try every solution to the puzzles,” Dweck recalled. “Many of them remarked, unprovoked, ‘This is my favorite test.’ ” Not so for those praised for their smarts. They assumed their failure was evidence that they weren’t really smart at all. “Just watching them, you could see the strain. They were sweating and miserable.”

Having artificially induced a round of failure, Dweck’s researchers then gave all the fifth-graders a final round of tests that were engineered to be as easy as the first round. Those who had been praised for their effort significantly improved on their first score—by about 30 percent. Those who’d been told they were smart did worse than they had at the very beginning—by about 20 percent.

Dweck had suspected that praise could backfire, but even she was surprised by the magnitude of the effect. “Emphasizing effort gives a child a variable that they can control,” she explains. “They come to see themselves as in control of their success. Emphasizing natural intelligence takes it out of the child’s control, and it provides no good recipe for responding to a failure.”

In follow-up interviews, Dweck discovered that those who think that innate intelligence is the key to success begin to discount the importance of effort. I am smart, the kids’ reasoning goes; I don’t need to put out effort. Expending effort becomes stigmatized—it’s public proof that you can’t cut it on your natural gifts.

* * *

. . . [T]eachers at the Life Sciences Secondary School in East Harlem . . . [have] seen Dweck’s theories applied to their junior-high students. Last week, Dweck and her protégée, Lisa Blackwell, published a report in the academic journal Child Development about the effect of a semester-long intervention conducted to improve students’ math scores.

Life Sciences is a health-science magnet school with high aspirations but 700 students whose main attributes are being predominantly minority and low achieving. Blackwell split her kids into two groups for an eight-session workshop. The control group was taught study skills, and the others got study skills and a special module on how intelligence is not innate. These students took turns reading aloud an essay on how the brain grows new neurons when challenged. They saw slides of the brain and acted out skits. . . . After the module was concluded, Blackwell tracked her students’ grades to see if it had any effect.

It didn’t take long. The teachers—who hadn’t known which students had been assigned to which workshop—could pick out the students who had been taught that intelligence can be developed. They improved their study habits and grades. In a single semester, Blackwell reversed the students’ longtime trend of decreasing math grades.

The only difference between the control group and the test group was two lessons, a total of 50 minutes spent teaching not math but a single idea: that the brain is a muscle. Giving it a harder workout makes you smarter. That alone improved their math scores.

* * *

Scholars from Reed College and Stanford reviewed over 150 praise studies. Their meta-analysis determined that praised students become risk-averse and lack perceived autonomy. The scholars found consistent correlations between a liberal use of praise and students’ “shorter task persistence, more eye-checking with the teacher, and inflected speech such that answers have the intonation of questions.”

Dweck’s research on overpraised kids strongly suggests that image maintenance becomes their primary concern—they are more competitive and more interested in tearing others down. A raft of very alarming studies illustrate this.

In one, students are given two puzzle tests. Between the first and the second, they are offered a choice between learning a new puzzle strategy for the second test or finding out how they did compared with other students on the first test: They have only enough time to do one or the other. Students praised for intelligence choose to find out their class rank, rather than use the time to prepare.

In another, students get a do-it-yourself report card and are told these forms will be mailed to students at another school—they’ll never meet these students and don’t know their names. Of the kids praised for their intelligence, 40 percent lie, inflating their scores. Of the kids praised for effort, few lie.

When students transition into junior high, some who’d done well in elementary school inevitably struggle in the larger and more demanding environment. Those who equated their earlier success with their innate ability surmise they’ve been dumb all along. Their grades never recover because the likely key to their recovery—increasing effort—they view as just further proof of their failure. In interviews many confess they would “seriously consider cheating.”

* * *

. . . sounds awfully clichéd: Try, try again.

But it turns out that the ability to repeatedly respond to failure by exerting more effort—instead of simply giving up—is a trait well studied in psychology. People with this trait, persistence, rebound well and can sustain their motivation through long periods of delayed gratification. Delving into this research, I learned that persistence turns out to be more than a conscious act of will; it’s also an unconscious response, governed by a circuit in the brain. Dr. Robert Cloninger at Washington University in St. Louis located the circuit in a part of the brain called the orbital and medial prefrontal cortex. It monitors the reward center of the brain, and like a switch, it intervenes when there’s a lack of immediate reward. When it switches on, it’s telling the rest of the brain, “Don’t stop trying. There’s dopa [the brain’s chemical reward for success] on the horizon.” While putting people through MRI scans, Cloninger could see this switch lighting up regularly in some. In others, barely at all.

What makes some people wired to have an active circuit?

Cloninger has trained rats and mice in mazes to have persistence by carefully not rewarding them when they get to the finish. “The key is intermittent reinforcement,” says Cloninger. The brain has to learn that frustrating spells can be worked through. “A person who grows up getting too frequent rewards will not have persistence, because they’ll quit when the rewards disappear.”

* * *

Offering praise has become a sort of panacea for the anxieties of modern parenting. Out of our children’s lives from breakfast to dinner, we turn it up a notch when we get home. In those few hours together, we want them to hear the things we can’t say during the day—We are in your corner, we are here for you, we believe in you.

In a similar way, we put our children in high-pressure environments, seeking out the best schools we can find, then we use the constant praise to soften the intensity of those environments. We expect so much of them, but we hide our expectations behind constant glowing praise.

* * *

Towards the end of his article, Bronson quotes Situationist contributor Mahzarin Banaji as saying, “Carol Dweck is a flat-out genius. I hope the work is taken seriously.” We agree, but hasten to add that by “genius,” we don’t mean “smart.” Professor Dweck’s genius is measured in terms of perseverance and hard work!

To read Po Bronson’s fascinating article in its entirety, click here. To read about Po Bronson’s current writing projects, go to his website, pobronson.com.

For previous, related Situationist posts, go to “How Situational Self-Schemas Influence Disposition” (which includes a video of Carol Dweck), “The Perils of Being Smart,” “Jock or Nerd,” and “First Person or Third.”

Posted in Education, Life, Social Psychology | 5 Comments »

The (Unconscious) Situation of our Consciousness – Part I

Posted by The Situationist Staff on November 15, 2007

In their 2003 article, “The Situational Character,” Situationist contributors Jon Hanson and David Yosifon summarized some of the evidence indicating that we greatly overestimate the role of our consciousness and of our will.

Over the next few weeks, we will offer a series of posts containing not only Hanson and Yosifon’s general summary but also other summaries of the more recent research on the hidden situation of our consciousness. This post begins with Hanson and Yosifon’s summary of some of the fascinating research on “automaticity” and the illusion of conscious will. As they argue, the failure to appreciate the ubiquity of automaticity and the illusion of conscious will is a major contributor to the dispositionist deception — the sense that not situation, but our conscious will, is calling the shots and pulling the levers of our own behavior.

* * *

. . . Daniel Wegner concludes, in a book that brings together generations of experimental research on the felt experience of human will, that “conscious will is an illusion. It is an illusion in the sense that the experience of consciously willing an action is not a direct indication that the conscious thought has caused the action.” Two other leading researchers of the will, [Situationist contributor] John Bargh and Tanya Chartrand, have made an extremely compelling, if unsettling, case that “most of a person’s everyday life is determined not by [her] conscious intentions and deliberate choices but by mental processes that are put into motion by features of the environment and that operate outside of conscious awareness and guidance”—a thesis that they acknowledge is “difficult . . . for people to accept.”

In part for that reason, we want to be certain that the claim is not misconstrued. None of the researchers in this field of social science have concluded, nor do we, that the “conscious will” is purely and totally an illusion. What is asserted—and what researchers have demonstrated—is that the experience of will is far more widespread than the reality of will. Wegner calls the latter the empirical will and argues that our perceived will is often an unreliable and misleading basis for understanding our behavior. The experience of will occurs often without empirical will, and thus creates the illusion of will. Moreover, it contributes to the illusions of choice, preference, and, more generally, dispositionism.

Automaticity

Exhibit A in the case that our conscious will is not as central as we presume is the fact that our conscious attentional capacity is extraordinarily limited. Remember your first attempt at driving a manual transmission automobile—before the processes became automatic. If you are like the authors, the memory still causes some embarrassment. Images of stalling, chugging, and squealing evince the limits of our ability to tell our feet and legs—much less the car—precisely how to behave. Now suppose that, at the same time you were attempting to let the clutch out with your left foot while depressing the gas pedal with your right, you were attempting to have a serious phone conversation with a friend about, say, your love problems. Such multitasking would be all but impossible given the severe limits on our ability to be consciously attentive.

The point has been demonstrated in numerous experiments. For instance, studies have shown that eating radishes instead of available chocolates depletes one’s ability to persist in attempting to solve puzzles, and that suppressing emotional reactions to a movie depletes one’s ability to solve anagrams or to squeeze a handgrip exerciser. The unhappy truth is that because “even minor acts of self-control, such as making a simple choice, use up [one’s] limited self-regulatory resource, conscious acts of self-regulation can occur only rarely in the course of one’s day.” Social psychologists studying the phenomenon have concluded that, in our daily lives, our conscious will “plays a causal role only [five percent] or so of the time.” Little wonder that the growing popularity of cell phones has made driving generally more dangerous, even for experienced drivers.

Exhibit B in the case for automaticity is the now-cascading evidence demonstrating the extent to which our choice biases, our schemas, our memories, our attributions, our affective responses, our motives, our perceptions, and so on are activated automatically—outside our conscious awareness, and often by exterior situational features and events. The evidence of implicit attitudes summarized above is just a small strand of the larger fabric of automaticity operating within our interiors.

There is also mounting evidence that our automatic perceptions are linked to our behavior, also through automatic means. Charles Carver and his colleagues, for instance, found that subjects participating as the “teacher” in a Milgram-esque experiment tended to give longer shocks when they had been primed with a list of hostility-related words. More recently, John Bargh and his collaborators have made numerous demonstrations of the automatic perception-behavior link. In one experiment, for example, some subjects were primed with words related to rudeness, others, with words related to politeness. The subjects were then placed in a situation that presented both an opportunity and motive to interrupt an ongoing conversation. The first, rude-primed group interrupted more than sixty percent of the time, while the second, polite-primed group interrupted less than twenty percent of the time. In other studies, subjects primed with stereotypical qualities of elderly people (e.g., wrinkles, Florida) behaved more like elderly people—walked more slowly, were more forgetful, and so on—than subjects who were not similarly primed. And Chartrand and Bargh have shown in other experiments that, without being aware of it, subjects often engage in so-called “behavior matching,” or the “chameleon effect.” For instance, when subjects are placed next to an interaction partner who is either rubbing his or her face or shaking his or her foot, the subjects tend to engage in behavioral patterns matching those of their interaction partner.

But the automaticity doesn’t stop there. Although we sometimes intentionally try to transform our conscious acts into automatic behavior—recall how you practiced playing the piano, dribbling a basketball, or driving that darn stick shift—much of what becomes automatic does so automatically. And that includes many of our goals and motivations. In one study, for instance, subjects were asked to rearrange scrambled words to make a sentence. Some subjects were nonconsciously primed to succeed because the words included items like “strive,” “achieve,” and “succeed.” Others were given neutral words that would not prime the goal to achieve. All of the subjects then were given a second, timed task—to rearrange letters in words to create new words. The anagrams ranged from simple to impossible. After completing the anagrams or running out of time, the subjects filled out questionnaires about their moods. Subjects who were not primed to succeed reported similar moods whether they performed well or poorly. The moods of the subjects who were primed to succeed, however, varied depending on whether they succeeded or failed. That is, they seemed to care about how well they performed, even though they were unaware of what caused their moods, much less that it was the success-oriented words they encountered in the first task. Subjects were, beneath their conscious radars, given a goal that they did not even know they had, and that goal remained, hidden in their interior situation, shaping their moods in ways they neither saw nor appreciated.

Our automatic goals are quite pervasive. When we commonly adopt a particular goal in a given situation—be it the workplace, the classroom, or the ping-pong table—that goal is likely to be triggered automatically in that situation, whether or not we want it to be triggered. As with all evidence of situational influence, such automatic goal-setting and mood-effect evidence further reveals the extent to which we humans are susceptible to situational manipulation.

* * *

To read Hanson and Yosifon’s law review article from which this excerpt is drawn, go to “The Situational Character.” For a sample of previous posts discussing the role of unconscious and automatic causes of behavior, see “The Situation of Reason,” “The Magnetism of Beautiful People,” and “The Unconscious Genius of Baseball Players.”

Posted in Choice Myth, Social Psychology | 1 Comment »

Situationist Theories of Hate – Part IV

Posted by The Situationist Staff on November 13, 2007

Social psychologist Alexander Gunz recently published a thoughtful and surprisingly fun summary of social psychological theories about why we humans seem to “hate” so readily and so often. The full article is in the latest edition of In-Mind, which we highly recommend.

We are excerpting portions of Gunz’s informative and entertaining article in this series of posts. Part I provided Gunz’s introduction to the topic of hate and a brief overview of the personality-based explanation by social psychologists (which we would describe as “internal situational” sources of hate). Part II included Gunz’s discussion of some of the external situational sources of hate first discovered by social psychologists. Part III discussed reasons why we hate and why we think we hate. This part, the final in the series, picks up there by discussing some of the new-fangled forms of hate, which help us protect our affirming self-image as people who don’t experience prejudice, and then summarizes some reasons for hope.

* * *

Why We Think We Hate – Me? Prejudiced?

Currently in North America, the predominant belief about “why we hate” is that “we don’t.” When asked “who are you prejudiced against?” most people respond as if the question was about their predilection for eating puppies.

Of course, most understand “prejudice,” here, to be synonymous with “hating ethnic minorities,” with a sideline in hating gays, and sometimes women. Christian Crandall (2002) points out that prejudice comes in a continuum, ranging from not-at-all hated groups (e.g., nurses) to very slightly disliked ones (e.g., Americans/Canadians, depending which side of the border you live on), to more openly disliked groups (e.g., prostitutes, gambling addicts), to the outright reviled (e.g., child molesters, rapists). But what about ethnic groups? Is prejudice against them dead? Adults may use more sophisticated epithets than ‘smelly’ (well, sometimes), but do we have more in common with Sherif’s boys than we care to admit?

The last half century has seen a steady decline in racial stereotyping — or at least, the type people admit to on surveys. There is a fair bit of regional variation in this of course, with equality being more fashionable in some places than others. . . .

But is prejudice really clearing up completely, if only in the staunchest bastions of egalitarianism? The late eighties saw several broadly similar theories emerge, each describing people as being conflicted over the expression of prejudice. Prejudiced actions would only emerge, these theories claimed, when they could somehow be coded (ambivalent racism theory), explained away (aversive racism theory), or when conflicting egalitarian beliefs were out of mind (symbolic racism theory).

Ambivalent racism theory argued that while “old fashioned” blatant hatred may be on the wane, its more subtle cousin, resentment, often creeps in to fill the hole. Reasoning along these lines, McConahay invented the enormously influential Modern Racism Scale, which aimed not directly at prejudice itself, but indirectly at people dragging their feet over steps to oppose prejudice. His scale quizzed people on issues such as whether Blacks were getting too pushy for civil rights, and whether Blacks’ anger was really so justified.

Gaertner and Dovidio (1986) took a subtly different approach, arguing that people don’t so much code their prejudices, as they acquire highly aversive feelings when those prejudices emerge too blatantly. Among the enormous volumes of evidence they accumulated for this aversive racism theory is one study that illustrates the difference particularly well. At an American university they found a significant drop in the amount of prejudice shown on the Modern Racism Scale between 1988 and 1999. On the surface of things, it seemed, progress was being made. But a second test showed far less encouraging results.

They asked students to evaluate a White or Black job candidate who was given credentials that were varied to be either weak, middling, or strong. The candidate’s race made no difference when his credentials were weak or strong. Nobody felt they could justify hiring a weak White candidate, or blatantly rejecting a strong Black one. But when he was given middling credentials, students had some wiggle room, with plausible reasons to hire or fire either way. In both 1988 and 1999 students said a middling candidate should be hired far less often when a photograph showed him to have Black skin rather than White. The only time race influenced people’s action was when they were able to plausibly claim that it hadn’t.

Is There No Hope?

Sherif’s attempts to rile prejudice worked better than he had imagined they would, but so too did the last phase in his camp experiment, which I haven’t told you about yet.

He rigged a number of events in which the Eagles and Rattlers were obliged to work together to achieve larger goals. For example, he blocked up the entire camp’s water supply with an artfully placed sack, blaming the problem on “vandals.” The two groups investigated, and converged on the “broken” faucet, which they then struggled together to fix. Final success brought universal celebration. In another event, Sherif sabotaged their bus, and the boys had to use their tug of war rope to start it again – everyone pulling, for once, in the same direction on it.

Food fights in the cafeteria stopped, tauntings dropped right off, and on the last day of camp they overwhelmingly voted to go home on the same bus together. At a stop on the way home, the Rattlers even volunteered to use one of their $5 prizes up buying a round of malted milks for everyone.

Prejudices, it seems, are more malleable than the people holding them tend to think. . . . Ever since Sherif’s experiment, psychologists have wondered about the best way to help such thaws along. Recently, psychologists Thomas Pettigrew and Linda Tropp (2006) gathered the results from hundreds of studies on this question (covering thousands of people), and used complex “meta-analysis” statistics to take a powerful new look at the collected results.
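
For readers curious about the mechanics, the core move in a meta-analysis is an inverse-variance weighted average: each study’s effect size counts in proportion to its precision. Here is a minimal, illustrative sketch in Python — the effect sizes and variances below are invented for demonstration and are not Pettigrew and Tropp’s actual data:

```python
import numpy as np

# Invented effect sizes from five hypothetical contact studies
# (negative values = more contact goes with less prejudice),
# along with each study's sampling variance.
effects = np.array([-0.25, -0.10, -0.30, -0.18, -0.22])
variances = np.array([0.010, 0.020, 0.015, 0.008, 0.012])

# Inverse-variance weighting: more precise studies get more weight.
weights = 1.0 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect: {pooled:.3f} (95% CI +/- {1.96 * pooled_se:.3f})")
```

Real meta-analyses, including Pettigrew and Tropp’s, add refinements such as random-effects models and checks for publication bias, but this weighted average is the heart of the technique.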

What they found is strong support for the ‘contact hypothesis’ – that personal contact between group members helps improve feelings. Contact works substantially better, moreover, when a number of conditions are present. From what you’ve heard so far, you won’t be surprised to know that it helps to have a shared goal to work towards (like getting your bus unstuck), and that it is good to have a shared outgroup to rally against (“stupid vandals”). Other things help too, though, such as having the contact occur on an equal footing, with no group having higher status than the other.

Conclusion

Our conviction over the years that TBOT [i.e., “those bastards over there”] are jerks has been matched in its consistency only by our inability to keep straight exactly who TBOT are. Three hundred years ago the French were popular in America as allies in the American Revolution; one hundred years ago Italians were looked down on as unwelcome American immigrants. Of late, Italians are considered non-specifically White, whereas the French have been castigated with outbursts of “freedom fry” munching spite by Americans who were upset that they weren’t doing their part to fight an even newer TBOT. If probed, many of these same Americans (of either period) will happily claim that they dislike the jerks they do, because, well, “everyone knows” that “that’s the way it’s always been.”

You may recall Muzafer Sherif ran his summer camp disguised as a janitor, but you may not have realized why. What Sherif knew was that boys will clam up instantly on sight of a grown-up, but people will say almost anything when only the janitor is present. Janitors aren’t real people, you understand.

There is an old saying that you don’t understand anyone until you have walked a mile in their shoes. Sherif wore the shoes, shirt, slacks, and even pushed the broom. Maybe if the rest of us spent more time wearing the shoes of those we tread underfoot, there would be less hate in the world. Maybe, but prejudice is a remarkably consistent human passion.

References
Crandall, C. S., Eshleman, A., & O’Brien, L. (2002). Social norms and the expression and suppression of prejudice: The struggle for internalization. Journal of Personality & Social Psychology, 82, 359-378.

Gaertner, S. L., & Dovidio, J. F. (1986). The aversive form of racism. In J. F. Dovidio & S. L. Gaertner (Eds.), Prejudice, discrimination, and racism (pp. 61-89). San Diego, CA: Academic Press, Inc.

McConahay, J. B. (1986). Modern racism, ambivalence, and the Modern Racism Scale. In J. F. Dovidio & S. L. Gaertner (Eds.), Prejudice, discrimination, and racism (pp. 91-125). San Diego, CA: Academic Press, Inc.

Pettigrew, T. F., & Tropp, L. R. (2006). A meta-analytic test of intergroup contact theory. Journal of Personality & Social Psychology, 90, 751-758.

Sherif, M. (1961). Intergroup conflict and cooperation: The Robbers Cave experiment. Norman, OK: University Book Exchange. Retrieved from http://psychclassics.yorku.ca/Sherif/index.htm

* * *

Again, to read the entirety of Gunz’s illuminating and engaging article, go to the latest edition of In-Mind.

Posted in Conflict, Emotions, Social Psychology | 4 Comments »

The Situation of Ideology – Part I

Posted by The Situationist Staff on November 12, 2007

Numerous Situationist contributors — including Mahzarin Banaji, Adam Benforado, Susan Fiske, Jon Hanson, John Jost, Brian Nosek, and Emily Pronin — have been studying and writing about the situational sources of ideology. With the 2008 presidential campaign picking up steam and with the political divisions apparently growing in depth and distance, that research seems particularly pertinent. It’s for that reason that the theme of the March 8 conference hosted by the Project on Law and Mind Sciences will be “Ideology, Psychology, and Law.”

Several popular articles have been written about the psychology of ideology over the last year. In this series, we will excerpt several of those articles. We open with a New York Times piece from February by Patricia Cohen, entitled “Across the Great Divide: Links Between Personality and Politics.”

* * *

Folk music and a collection of feminist poetry may well be dead giveaways that there is a liberal in the house. But what about an ironing board or postage stamps or a calendar?

What seem to be ordinary, everyday objects to some people can carry a storehouse of information about the owner’s ideology, says a new wave of social scientists who are studying the subtle links between personality and politics.

Research into why someone leans left or right — a subject that stirred enormous interest in the aftermath of World War II before waning in the 1960s — has been revived in recent years, partly because of a shift in federal funds for politics and terrorism research, new technology like brain imaging and a sharper partisan divide in the nation’s political culture.

“I believe that recent developments in psychological research and the world of politics — including responses to 9/11, the Bush presidency, the Iraq War, polarizing Supreme Court nominations, Hurricane Katrina, and ongoing controversies over scientific and environmental policies — provide ample grounds for revisiting” the psychological basis of Americans’ opinions, party and voting patterns, [Situationist contributor] John T. Jost, a psychologist at New York University, wrote in a recent issue of American Psychologist.

The newest work in the field, found in a growing number of papers, symposiums and college courses, touches on factors from genetics to home décor. . . .

For anyone who assumes political choices rest on a rational analysis of issues and self-interest, the notion that preference for a candidate springs from the same source as the choice of a color scheme can be disturbing. But social psychologists assume that all beliefs, including political ones, partly arise from an individual’s deep psychological fears and needs: for stability, order and belonging, or for rebellion and novelty.

These needs and worries vary in degree, develop in childhood and probably have a temperamental and a genetic component, said Arie Kruglanski of the University of Maryland. A study of twins, for instance, has shown that a conservative or progressive orientation can be inherited, while a decades-long study has found that personality traits associated with liberalism or conservatism later in life show up in preschoolers.

No one is arguing that an embrace of universal national health care or tax cuts arises because of a chromosome or the unconscious residue from a schoolyard spat. What Mr. Jost and Mr. Kruglanski say is that years of research show that liberals and conservatives consistently match one of two personality types. Those who enjoy bending rules and embracing new experiences tend to turn left; those who value tradition and are more cautious about change tend to end up on the right.

What’s more, these traits are reflected in musical taste, hobbies and décor. Dana R. Carney, a postdoctoral fellow at Harvard University who worked with Mr. Jost and Samuel D. Gosling of the University of Texas at Austin among others, found that the offices and bedrooms of conservatives tended to be neat and contain cleaning supplies, calendars, postage stamps and sports-related posters; conservatives also tended to favor country music and documentaries. Bold-colored, cluttered rooms with art supplies, lots of books, jazz CDs and travel documents tended to belong to liberals (providing sloppy Democrats with an excuse to refuse to clean up on principle).

Jonathan Haidt, a social psychologist at the University of Virginia, said he found this work intriguing but was more inclined to see a person’s moral framework as a source of difference between liberals and conservatives. Most liberals, he said, think about morality in terms of two categories: how someone’s welfare is affected, and whether it is fair. Conservatives, by contrast, broaden that definition to include loyalty, respect for authority, and purity or sanctity. Conservatives have a richer, more elaborate moral horizon than liberals, Mr. Haidt said, because there is a “whole dimension to human experience best described as divinity or sacredness that conservatives are more attuned to.”

So how does he explain the red-blue divide? “Areas with less mobility and less diversity generally have the more traditional,” broadened definition of morality, “and therefore were more likely to vote for George W. Bush — and to tell pollsters that their reason was ‘moral values,’ ” he and his co-writer, Jesse Graham, say in a paper to be published this year in the journal Social Justice Research.

Mr. Jost did his own research on the red-blue divide. Using the Internet, he and his collaborators gave personality tests to hundreds of thousands of Americans. He found that states with people who scored high on “openness” were significantly more likely to have voted for the Democratic candidate in the past three elections, even after adjustments were made for income, ethnicity and population density. States that scored high on “conscientiousness” went Republican in the past three elections.
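
To make the phrase “after adjustments were made” concrete: analyses of this kind typically regress vote share on the personality score while holding the control variables fixed. Here is a toy sketch with entirely fabricated state-level data — none of these numbers are Jost’s — meant only to show the form of the computation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states = 50

# Fabricated, standardized state-level predictors (illustration only).
openness = rng.normal(0, 1, n_states)
income = rng.normal(0, 1, n_states)
density = rng.normal(0, 1, n_states)
# Fabricated Democratic vote share with a built-in openness effect.
dem_vote = 0.50 + 0.05 * openness + 0.01 * income + rng.normal(0, 0.03, n_states)

# Regress vote share on openness while controlling for income and
# population density (plus an intercept column).
X = np.column_stack([np.ones(n_states), openness, income, density])
coefs, *_ = np.linalg.lstsq(X, dem_vote, rcond=None)
print(f"openness coefficient, holding controls fixed: {coefs[1]:.3f}")
```

The coefficient on openness answers the question the article describes: how much vote share shifts with openness once the other measured differences between states are accounted for.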

* * *

To read the entire article, click here. For two previous Situationist posts on the situation of ideology, see “Ideology is Back” (by John Jost) and “Ideology Shaping Situation or Vice Versa.” For an interview of Patricia Cohen and Jesse Graham on WNYC Public Radio, click here.

Posted in Conflict, Ideology, Politics, Social Psychology | 3 Comments »

The Facial Obviousness of Lying

Posted by The Situationist Staff on November 11, 2007

Mark Frank, a professor of social psychology at the University of Buffalo, studies the connection between facial expression and honesty. Frank has identified specific patterns in the tics, furrows, smirks, frowns and displacement actions of the facial muscles when one is speaking and connected those patterns with the speaker’s truthfulness. His research has attracted the attention of law enforcement officials and is now the subject of an NPR article by Dina Temple-Raston. We excerpt portions of it below.

* * *

Frank teaches judges, FBI agents and interrogators, among others, to recognize and accurately read the tiny cues from facial muscles that can happen in the blink of an eye. Frank calls them “hot spots” — emotional cues that might be linked to deceit, or might be clues for further interrogation.

Frank is good at seeing those cues normal people might miss. Eyebrow movement, for example, can be a dead giveaway. Frank says his research has shown that when eyebrows are pulled up and together, they express fear. A muscle in the lower part of the face — something you feel when you stretch your mouth back — is also a hot spot.

“You see that in photos, like when a pickup truck is starting to overturn,” Frank said. “You see fear expression in the driver’s face.”

Paul Moskal, a supervisory special agent with the FBI’s office in Buffalo, went through Frank’s microexpression training. He said it has made him a better investigator and a better listener.

“We all have a gut feeling that we know when people are lying, but it is very hard for us to articulate why,” Moskal said. “I think it is putting science to what we think is intuitive, and for me the interest is where they cross. It makes you aware of things you weren’t aware of before.”

A Law Enforcement Tool

To a certain extent, Frank is codifying human intuition while he’s also debunking myths about how to read people.

“The literature shows that liars don’t make less eye contact than truth tellers. But you ask anyone on the planet what liars do, the first thing they agree on is liars don’t look you in the eye,” Frank said. “Even just getting over that mythology is a step in the right direction.”

* * *

To read the rest of the article, click here.

Posted in Emotions, Law | 1 Comment »

The Situation of Reason

Posted by The Situationist Staff on November 9, 2007

In the mid-1970s, Situationist contributor Timothy Wilson and Richard Nisbett conducted one of the best-known social psychology experiments of all time. It was strikingly simple and involved asking subjects to assess the quality of hosiery. Situationist contributors Jon Hanson and David Yosifon have described the experiment this way:

Subjects were asked in a bargain store to judge which one of four nylon stocking pantyhose was the best quality. The subjects were not told that the stockings were in fact identical. Wilson and Nisbett presented the stockings to the subjects hanging on racks spaced equal distances apart. As situation would have it, the position of the stockings had a significant effect on the subjects’ quality judgments. In particular, moving from left to right, 12% of the subjects judged the first stockings as being the best quality, 17% of the subjects chose the second pair of stockings, 31% of the subjects chose the third pair of stockings, and 40% of the subjects chose the fourth—the most recently viewed pair of stockings. When asked about their respective judgments, most of the subjects attributed their decision to the knit, weave, sheerness, elasticity, or workmanship of the stockings that they chose to be of the best quality. Dispositional qualities of the stocking, if you will. Subjects provided a total of eighty different reasons for their choices. Not one, however, mentioned the position of the stockings, or the relative recency with which the pairs were viewed. None, that is, saw the situation. In fact, when asked whether the position of the stockings could have influenced their judgments, only one subject admitted that position could have been influential. Thus, Wilson and Nisbett conclude that “[w]hat matters . . . is not why the [position] effect occurs but that it occurs and that subjects do not report it or recognize it when it is pointed out to them.”
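
A quick statistical aside: with four identical stockings, a position-blind chooser should pick each about 25 percent of the time, so a simple goodness-of-fit test shows how far the reported 12/17/31/40 split departs from chance. The sketch below assumes, purely for illustration, a sample of 100 subjects, since the excerpt does not give the original sample size:

```python
from scipy.stats import chisquare

# Hypothetical counts for 100 subjects, matching the reported
# 12%/17%/31%/40% preference for the four identical stockings.
observed = [12, 17, 31, 40]
expected = [25, 25, 25, 25]  # what "position doesn't matter" predicts

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p_value:.5f}")
# With these assumed counts, p falls well below 0.001 -- a skew this
# large is very unlikely if position truly had no effect.
```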

One of the core messages of more recent brain research is that most mental activity happens in the automatic or unconscious region of the brain. The unconscious mind is the platform for a set of mental activities that the brain has relegated beyond awareness for efficiency’s sake, so the conscious mind can focus on other things. In his book, “Strangers to Ourselves,” Timothy Wilson notes that the brain can absorb about 11 million pieces of information a second, of which it can process about 40 consciously. The unconscious brain handles the rest.

The automatic mind generally takes care of things like muscle control. But it also does more ethereal things. It recognizes patterns and construes situations, searching for danger, opportunities or the unexpected. It also shoves certain memories, thoughts, anxieties and emotions up into consciousness.

Much has been learned about how often we are wrong about what we think moves us, and about how what we don’t know we know can influence us. Psychologist Susan Courtney has an absolutely terrific article in Scientific American titled “Not So Deliberate: The decisive power of what you don’t know you know.” We excerpt portions of her article below.

* * *

When we choose between two courses of action, are we aware of all the things that influence that decision? Particularly when deliberation leads us to take a less familiar or more difficult course, scientists often refer to a decision as an act of “cognitive control.” Such calculated decisions were once assumed to be influenced only by consciously perceived information, especially when the decision involved preparation for some action. But a recent paper by Hakwan Lau and Richard Passingham, “Unconscious Activation of the Cognitive Control System in the Human Prefrontal Cortex,” demonstrates that the influences we are not aware of can hold greater sway than those we can consciously reject.

Biased competition

We make countless “decisions” each day without conscious deliberation. For example, when we gaze at an unfamiliar scene, we cannot take in all the information at once. Objects in the scene compete for our attention. If we’re looking around with no particular goal in mind, we tend to focus on the objects most visually different from their surrounding background (for example, a bright bird against a dark backdrop) or those that experience or evolution have taught us are the most important, such as sudden movement or facial features — particularly threatening or fearful expressions. If we do have a goal, then our attention will be drawn to objects related to it, such as when we attend to anything red or striped in a “Where’s Waldo” picture. Stimulus-driven and goal-driven influences alike, then, bias the outcome of the competition for our attention among a scene’s many aspects.
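
The competition metaphor can be made concrete with a toy model. The sketch below is our own illustration (the values and the softmax combination rule are assumptions, not a model from the paper): each object’s pull on attention combines its stimulus-driven salience with a goal-driven bias, and the strongest combined signal wins:

```python
# A toy sketch of biased competition for attention; the values and the
# softmax combination rule are invented for illustration.
import math

def attention_weights(salience, goal_relevance, goal_bias=1.5):
    """Softmax over combined stimulus-driven and goal-driven signals."""
    signals = [s + goal_bias * g for s, g in zip(salience, goal_relevance)]
    exps = [math.exp(x) for x in signals]
    total = sum(exps)
    return [e / total for e in exps]

# Three objects in a scene: a bright bird, a gray rock, a red-striped hat.
salience       = [2.0, 0.2, 1.0]   # visual contrast with the background
goal_relevance = [0.0, 0.0, 2.0]   # we are searching for Waldo's stripes

for name, w in zip(["bird", "rock", "hat"],
                   attention_weights(salience, goal_relevance)):
    print(f"{name}: {w:.2f}")
# With goal_bias=0 (no goal), the salient bird wins; with the Waldo goal
# active, the striped hat draws most of the attention.
```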

The idea of such biased competition (a term coined in 1995 by Robert Desimone and John Duncan) also applies to situations in which we decide among many possible actions, thoughts or plans. What might create an unconscious bias affecting these types of competition?

For starters, previous experience in a situation can make some neural connections stronger than others, tipping the scales in favor of a previously performed action. The best-known examples of this kind of bias are habitual actions (as examined in a seminal 1995 article by Salmon and Butters) and what is known as priming.

Habitual actions are what they sound like — driving your kids to school, you turn right on Elm because that’s how you get there every day. No conscious decision is involved. In fact, it takes considerable effort to remember to instead turn left if your goal is to go somewhere else.

Priming works a bit differently; it’s less a well-worn route than a prior suggestion that steers you a certain way. If I ask you today to tell me the first word that comes to mind that starts with the letters mot and you answer mother, you’ll probably answer the same way if I ask you the same thing again four months from now, even if you have no explicit recollection of my asking the question. The prior experience primes you to repeat your performance. Other potentially unconscious influences are generally emotional or motivational.
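
A toy sketch can make the priming mechanism concrete. In the hypothetical model below (the boost value is invented for illustration), prior exposure simply multiplies a candidate word’s weight in the competition to complete the stem:

```python
# A toy model of priming as a tipped competition among candidate
# completions; the prime_boost value is an invented illustration.
import random

def complete_stem(candidates, primed_word=None, prime_boost=2.0):
    """Pick a completion; prior exposure multiplies a word's weight."""
    weights = [prime_boost if w == primed_word else 1.0 for w in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

candidates = ["mother", "motor", "motel", "motif"]
# Months after answering "mother", the old answer still carries extra weight.
picks = [complete_stem(candidates, primed_word="mother") for _ in range(10_000)]
print(f"'mother' chosen {picks.count('mother') / 10_000:.0%} of the time "
      f"(vs. 25% with no prime)")
```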

Of course, consciously processed information can override these emotional and experience-driven biases if we devote enough time and attention to the decision. Preparing to perform a cognitive action (“task set”) has traditionally been considered a deliberate act of control and part of this reflective, evaluative neural system. (See, for example, the 2002 review by Rees, Kreiman and Koch.) As such, it was thought that task-set preparation was largely immune to subconscious influences.

We generally accept it as okay that some of our actions and emotional or motivational states are influenced by neural processes that happen without our awareness. For example, it aids my survival if subliminally processed stimuli increase my state of vigilance — if, for example, I jump out of the way before I am consciously aware that the thing at my feet is a snake. But we tend to think of more conscious decisions differently. If I have time to recognize an instruction, remember what that means I’m supposed to do and prepare to make a particular kind of judgment on the next thing I see, then the assumption is that this preparation must be based entirely on what I think I saw — not what I wasn’t even aware of.

Yet Lau and Passingham have found precisely the opposite in their study — that information we’re not aware of can influence even the most deliberative, non-emotional sort of decision more strongly than information we are aware of.

Confusing cues

Lau and Passingham had their subjects perform one of two tasks: when shown a word on a screen, the subjects had to decide either a) whether or not the word referred to a concrete object or b) whether or not the word had two syllables. A cue given just before each word — the appearance of either a square or a diamond — indicated whether to perform the concrete judgment task or the syllables task. These instruction cues were in turn preceded by smaller squares or diamonds that the subjects were told were irrelevant. A variation in timing between the first and second cues determined whether the participants were aware of seeing both cues or only the second.

As you would expect, the task was more difficult when the cues were not the same — that is, when a diamond preceded a square or a square a diamond. The surprising finding was that this confusion effect was greater when the timing between the cues was so close that the participants didn’t consciously notice the first cue. When the cues were mixed but the subjects were consciously aware of only the second instruction, their responses — and their brain activity as measured by functional magnetic resonance imaging (fMRI) — indicated that the “invisible” conflicting cue had made them more likely to prepare to do the “wrong” task. Although similar effects have been shown on tasks that involved making a decision about the appearance of the image immediately following the “invisible” image, this is the first time this effect has been demonstrated for complex task preparation.
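
To see the design’s logic, consider the toy simulation below. It is our sketch, not the authors’ analysis, and its parameters are invented; the key assumption, mirroring the paper’s finding, is that a mismatched first cue biases task preparation more strongly when it escapes awareness:

```python
# A toy simulation of the cue-congruency design; all probabilities are
# invented, and the paper's finding (a larger cost from unseen cues) is
# built into the assumed bias parameters rather than derived.
import random

def wrong_task_prepared(congruent, cue_visible,
                        bias_visible=0.05, bias_invisible=0.15):
    """Return True if the subject prepares the wrong task on this trial."""
    bias = bias_visible if cue_visible else bias_invisible
    p_error = 0.02 if congruent else 0.02 + bias
    return random.random() < p_error

def error_rate(congruent, cue_visible, n=100_000):
    return sum(wrong_task_prepared(congruent, cue_visible)
               for _ in range(n)) / n

for visible, label in ((True, "visible"), (False, "invisible")):
    cost = error_rate(False, visible) - error_rate(True, visible)
    print(f"{label} first cue: congruency cost ~ {cost:.3f}")
# The unseen mismatched cue yields the larger cost, because a cue we are
# aware of can (by assumption) be partly inhibited.
```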

It may not be surprising that we juggle multiple influences when we make decisions, including many of which we are not aware — particularly when the decisions involve emotional issues. Lau and Passingham, however, show us that even seemingly rational, straightforward, conscious decisions about arbitrary matters can easily be biased by inputs coming in below our radar of awareness. Although it wasn’t directly tested in this study, the results suggest that being aware of a misleading cue may allow us to inhibit its influence. And the study makes clear that influences we are not aware of (including, but not limited to, those brought in by experience and emotion) can sneak into our decisions unchecked.

* * *

Susan Courtney is an associate professor of psychology at Johns Hopkins University, where she runs the Courtney Lab of Cognitive Neuroscience and Working Memory.

Posted in Choice Myth | 2 Comments »

Blind to our Situational Blindness

Posted by The Situationist Staff on November 8, 2007

Last month Bob Lane wrote a review of Madeleine Van Hecke’s interesting new book, Blind Spots: Why Smart People Do Dumb Things (2007). (Bob Lane is a retired professor of English and Philosophy who is currently an Honourary Research Associate in Philosophy at Malaspina University-College in British Columbia, Canada.) We excerpt portions of his review below.

* * *

Near the end of her book Van Hecke relates an approach used by a high school history teacher to get his students to think about the complex notion of causality in history. Instead of merely listing the causes of World War II for them to write down in their notes, he would start by telling them to think about a man driving home on a rainy night after a hard day’s work. The man’s car spun out and ended up plowing into a tree. Then the teacher would ask the class, “How many different possible causes for this accident can you come up with?” The students would come up with many suggestions: the driver was tired from work and dozed off; the road was slippery; the tires were worn; a part was faulty. Then the teacher would ask them, “Do you think the causes of World War II would be more or less complicated than the causes of this man’s automobile accident?” (p. 206)

Van Hecke develops that anecdote into a full discussion of how to overcome the blind spot of hidden causes. From it comes a list of several general questions that we can ask when trying to determine the causes of an event; questions that force us to think about fundamental causes, contributing causes, hidden causes, flukes and so on. I mention this as an example of the formula she uses in each chapter throughout the book: an anecdote or illustration of a real life situation followed by an analysis of that situation to tease out blind spots that often lead us astray. Finally she provides a set of tactics to help us identify our blind spots by probing more deeply into the situation. And then we are given a summary of what we should have learned in the chapter and a “sneak preview” of the material in the next chapter of the book.

Eleven chapters plus a preface and an afterword comprise what Michael Shermer calls this “delightful romp through the maze of human fallibility” providing the reader with a thoughtful, insightful and often humorous look at the human condition and our propensity for blind spots. Ten blind spots are presented, by definition and example, and then analyzed with a psychological discussion of why we are “blind” in that area and a practical discussion of how we might better become aware of our blind spots and work to correct them.

Chapter Seven, for example, provides a good discussion of blind spot #6: trapped by categories. First an example from Ellen Langer’s book Mindfulness provides a way into the discussion. “Imagine that a wealthy man who is part of a scavenger hunt rings your doorbell in the middle of the night. He asks if you have the final item on his list, a piece of wood that measures about three-by-seven feet.” He says he will pay you $10,000 for it. You think: what could I find that would meet those criteria? Most of us wouldn’t realize that we were standing right next to the needed item: the door. Too often we are “trapped by categories” and so fail to see new and unique ways of using things. As Van Hecke argues, “classification flattens our perception of individuals” as well as of things and can lead to a diminished understanding and appreciation of the world around us.

These sorts of lessons come in short chapters which are easy to read, full of humor, rich with suggestions for further reading, and well documented. . . . One of its lessons is that we all, even smart people, have blind spots.

* * *

I recommend the book as a useful text in critical thinking classes, and as a “good read” for the general reader interested in improving his/her critical acumen in the world of mass media, sound bites, and political hectoring. . . .

* * *

To read Lane’s entire review on Metapsychology, click here. For other recent Situationist posts discussing blind spots, go to “I’m Objective, You’re Biased,” and “Mistakes Were Made (but not by me).” For a previous post discussing research by Ellen Langer, go to “January Fool’s Day.”

Posted in Life, Social Psychology | Leave a Comment »

Deep Capture – Part II

Posted by J on November 6, 2007


This is the second of a multi-part series on what Situationist Contributor David Yosifon and I call “deep capture.” This post, like Part I, is drawn from our 2003 article, “The Situation” (downloadable here).

The most basic argument behind the prediction of deep capture is that if people are moved by their internal and external situations (particularly while believing themselves to be moved primarily by disposition), then, to move them, there will be a hard-to-see or hard-to-take-seriously competition over the situation.

Part I of this series explained that our “deep capture” story is very much analogous to the (shallow) capture story told by economists (such as Nobel laureate George Stigler) and public choice theorists for decades regarding the competition over prototypical regulatory institutions. This post looks to history for another analogy to the process that we claim is widespread today — the deep capture of how we understand ourselves.

(Situationist artist Marc Scheff is providing the remarkable images at the top of each post in this series.)

* * *

I, Galileo [Galilei], . . . seventy years of age, arraigned personally for judgment, kneeling before you Most Eminent and Most Reverend Cardinals Inquisitors-General against heretical depravity in all of Christendom, . . . swear that I have always believed, I believe now, and with God’s help I will believe in the future all that the Holy Catholic and Apostolic Church holds, preaches, and teaches . . . . I have been judged vehemently suspected of heresy, namely of having held and believed that the [S]un is the center of the world and motionless and the [E]arth is not the center and moves.

Therefore, desiring to remove from the minds of Your Eminences and every faithful Christian this vehement suspicion, rightly conceived against me, with a sincere heart and unfeigned faith I abjure, curse, and detest the above-mentioned errors and heresies, . . . and I swear that in the future I will never again say or assert, orally or in writing, anything which might cause a similar suspicion about me . . . .

Galileo Galilei

With the foundation of shallow capture in place, we can now build upon it, or dig beneath it, to introduce deep capture. To catch your first glimpse of the phenomenon, recall the Galileo story. . . . Galileo . . . was committed to realism . . . [, and his critics, including Cardinal Bellarmine were], like legal economists, . . . wed to an unrealistic, reductionist model.

Let us push the analogy further. Galileo was, for most of his life, devoted to the idea that humans could, through methods of observation, discover and make sense of the natural order. He was committed to basing theories about our world and the place of it in the universe on all the evidence and clues available for human inspection, even if doing so challenged widely held self-affirming and faith-based beliefs about the Earth’s centrality in the universe. Recall that Galileo lived at a time when most people believed themselves to inhabit a stationary world. The intellectual establishment of the Renaissance, controlled to a large degree by the Catholic Church, perceived human knowledge as a fundamentally static thing. Certain environmental features seemed obvious: the Earth was not moving, and the Sun was rotating about the Earth. The validity of those notions was bolstered by everyday experience and found confirmation in several biblical texts, and in the basic assumption that heaven reigned above the Earth and hell below.

Galileo, informed by the work of fellow astronomical realist Copernicus, was interested in exploring and studying elements of our planet and the celestial bodies whirling “above” it for hard-to-see clues into the reality of celestial dynamics. Mathematics and a telescope both provided critical lenses through which he could get a better view.

Using these tools, Galileo helped to turn the dominant Aristotelian model of the universe, and our place in it, on its head. It is important to note, however, that the Aristotelian model (as enhanced through Ptolemy‘s refinements) provided an adequate “as if” theory, for most purposes. Through theory and observation, Galileo removed the Earth from its stable center, around which the Sun was revolving, and placed the Sun at the immovable center of the Earth’s rotations. Put differently, by studying our astronomical situation more closely, Galileo discovered our astronomical fundamental attribution error: attributing the movement of the celestial situation to the centrality and fixity of the Earth instead of attributing our own movement, like that of the other heavenly bodies, to the celestial situation. Galileo did not provide absolute proof for his challenging worldview, although he believed the telescopic observations were sufficient to overturn the geocentric model. What he did provide was a refined theory and new observations–such as the discovery of four moons orbiting Jupiter, the phases of Venus, and an exegesis of the tides–that strongly suggested that the astronomical situation was far more influential than the then-dominant geocentric view allowed.


We want to push this analogy even further. Despite Galileo’s compelling evidence that the Earth revolved around the Sun, he appeared to have been wrong. To be sure, we might look today and judge that he was (comparatively) right, after all. But forget for a moment the revival and celebration of Galileo’s pre-abjuration views, beginning in the eighteenth century, and temporarily ignore his stature today as a father of modern science. Instead, imagine yourself living in early seventeenth-century Italy. It is [Cardinal] Bellarmine’s view–informed by biblical passages, religious authorities, popular perceptions, experience, and naked-eye observations–which confirms your intuitions and the formal positions of the most powerful groups and institutions in Italy. And it is Galileo, not Bellarmine, who recants and renounces his earlier “findings” and opinions. Chances are that you, that we, would have believed Galileo was a heretic and never doubted the process that “proved” him to be one. From this perspective, Bellarmine was obviously right, and Galileo, clearly wrong.

So how could one of the greatest scientists of all time be so wrong? The answer is obvious; indeed, it is one of the reasons that the story is so well known: the scientific community was not sufficiently insulated from powerful institutions with a stake in scientific outcomes. More concretely, because Galileo’s work was threatening to the Catholic Church and its teachings, and because of the Church’s encompassing power, Galileo was under intense pressure–indeed, was ultimately convicted by the inquisitors–to “restate” his views on the structure of the universe. Galileo’s recantation was the result, not of scientific observation, but of religious persecution and the very real threat of a horrible death. The situational forces behind Galileo’s “restated” views are thus unmistakable. Galileo made his recantation decision with the equivalent of a gun to his head. Of course, as we have argued throughout this Article, such situational pressures are rarely so obvious.

This can all be expressed, in somewhat stylized fashion, in Stiglerian terms. In recanting, Galileo was “captured” by the Church much like, say, the now defunct Civil Aeronautics Board was once said to be captured by the airline industry. He claimed to be saying what he believed “with sincere heart and unfeigned faith,” independent of any pressure from the Church, when in fact he was serving the Church’s interests, despite his own beliefs.

* * *

Part III of this series begins providing evidence of deep capture today in the United States. To read Part III, click here.

Posted in Choice Myth, Deep Capture, History, Legal Theory | 10 Comments »

Being Smart About “Dumb Blonde” Jokes

Posted by The Situationist Staff on November 5, 2007

Jon Hanson recently examined the overlooked effects of sexualized stereotypes of women in televised advertisements, including ads characterized as quasi-public service announcements.

We now bring news of a new study by Thomas E. Ford of Western Carolina University, which finds that jokes about blondes’ intelligence and women drivers lead to hostile feelings and discrimination against women. The study will be published in the Personality and Social Psychology Bulletin. Below is an excerpt from a Newswise summary of the study.

* * *

A research project led by a Western Carolina University psychology professor indicates that jokes about blondes and women drivers are not just harmless fun and games; instead, exposure to sexist humor can lead to toleration of hostile feelings and discrimination against women.

“Sexist humor is not simply benign amusement. It can affect men’s perceptions of their immediate social surroundings and allow them to feel comfortable with behavioral expressions of sexism without the fear of disapproval of their peers,” said Thomas E. Ford, a new faculty member in the psychology department at WCU. “Specifically, we propose that sexist humor acts as a ‘releaser’ of prejudice.”

Ford, who conducted research into sexist humor with three graduate students at his previous institution of Western Michigan University, presents their findings in an article accepted for publication in Personality and Social Psychology Bulletin, one of the nation’s top social psychology journals. The article, “More Than Just a Joke: The Prejudice-Releasing Function of Sexist Humor,” is scheduled for publication in February.

In the article, Ford and the graduate student co-authors describe two research projects designed to test the theory that “disparagement humor” has negative social consequences and plays an important role in shaping social interaction.

“Our research demonstrates that exposure to sexist humor can create conditions that allow men – especially those who have antagonistic attitudes toward women – to express those attitudes in their behavior,” he said. “The acceptance of sexist humor leads men to believe that sexist behavior falls within the bounds of social acceptability.”

In one experiment, Ford and his student colleagues asked male participants to imagine that they were members of a work group in an organization. In that context, they either read sexist jokes, comparable non-humorous sexist statements, or neutral (non-sexist) jokes. They were then asked to report how much money they would be willing to donate to help a women’s organization. “We found that men with a high level of sexism were less likely to donate to the women’s organization after reading sexist jokes, but not after reading either sexist statements or neutral jokes,” Ford said.

In the second experiment, researchers showed a selection of video clips of sexist or non-sexist comedy skits to a group of male participants. In the sexist humor setting, four of the clips contained humor depicting women in stereotypical or demeaning roles, while the fifth clip was neutral. The men were then asked to participate in a project designed to determine how funding cuts should be allocated among select student organizations.

“We found that, upon exposure to sexist humor, men higher in sexism discriminated against women by allocating larger funding cuts to a women’s organization than they did to other organizations,” Ford said. “We also found that, in the presence of sexist humor, participants believed the other participants would approve of the funding cuts to women’s organizations. We believe this shows that humorous disparagement creates the perception of a shared standard of tolerance of discrimination that may guide behavior when people believe others feel the same way.”
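
The reported pattern amounts to a 2×2 interaction: sexism level by humor condition. The sketch below uses invented numbers (not Ford’s data) simply to show how such an interaction contrast is computed:

```python
# A toy illustration of the reported interaction; these means are
# invented for demonstration and are not Ford's data.
cuts = {  # hypothetical mean % funding cut assigned to the women's group
    ("neutral", "low_sexism"): 10, ("neutral", "high_sexism"): 12,
    ("sexist", "low_sexism"): 11, ("sexist", "high_sexism"): 20,
}

# Does sexist humor widen the gap between high- and low-sexism men?
gap_sexist = cuts[("sexist", "high_sexism")] - cuts[("sexist", "low_sexism")]
gap_neutral = cuts[("neutral", "high_sexism")] - cuts[("neutral", "low_sexism")]
print(f"interaction contrast: {gap_sexist - gap_neutral} points")
# A positive contrast is the "prejudice-releasing" pattern: high-sexism
# men diverge mainly after exposure to sexist humor.
```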

The research indicates that people should be aware of the prevalence of disparaging humor in popular culture, and that the guise of benign amusement or “it’s just a joke” gives it the potential to be a powerful and widespread force that can legitimize prejudice in our society, he said.

* * *

For a previous Situationist post on some of the situational causes and effects of humor, see “Situation Comedy.”

Posted in Life, Public Policy | 3 Comments »

 