The Situationist

Archive for the ‘Situationist Contributors’ Category

Jon Hanson & Jacob Lipton Respond To Randall Kennedy’s “Black Tape” Op-ed

Posted by The Situationist Staff on December 1, 2015

From the Justice Blog:

In the spirit of fostering a community-wide conversation, we wanted to respond to Randall Kennedy’s provocative op-ed.

Although Randy is unperturbed by the black tape recently placed over his photograph, he is quite concerned about something else: the potentially destructive effects of taking the outrage and demands of some students at Harvard Law School – and at universities around the country – too seriously.

These students perceive racism not only on the walls of Harvard Law School but also in its history, culture, curriculum, and personnel. Having asked some of those students to explain “with as much particularity as possible” the sources of their discontent, Randy is largely unconvinced. Some of their “complaints” may have “a ring of validity,” but others “are dubious.” True, their “accusations warrant close examination and may well justify further reforms,” but his primary concern is with the intensity and unintended consequences of their grievances. On the pages of The New York Times, he cautions those youngsters to avoid “exaggerat[ing] the scope of the racism” or “minimizing their own strength and the victories that they and their forebears have already achieved.”

Read their full essay here.

Related Situationist posts:

Posted in Education, Ideology, Implicit Associations, Law, Legal Theory, Situationist Contributors | Leave a Comment »

Supreme Court Acknowledges “Unconscious Prejudice.”

Posted by The Situationist Staff on June 26, 2015

From Slate, by Kenji Yoshino:

Thursday’s blockbuster opinion in the Texas Department of Housing and Community Affairs v. Inclusive Communities Project case will be primarily and justly remembered for interpreting the Fair Housing Act to include a disparate-impact cause of action. In anti-discrimination law, “disparate treatment” requires an intent to discriminate, while “disparate impact” can allow a plaintiff to win even in the absence of discriminatory intent. For instance, if an entity has a policy that disproportionately affects a protected group, it has to justify that disparity even in the absence of any allegation of discriminatory intent. If it cannot produce such a justification, it will lose. As many progressives have already noted, this interpretation of the FHA is a big win, as discriminatory intent is often difficult to prove.

While less obvious, however, there is a passage in the FHA case that can also be counted as a potential win for progressives. On Page 17 of the slip opinion, Justice Anthony Kennedy writes, “Recognition of disparate-impact liability under the FHA also plays a role in uncovering discriminatory intent: It permits plaintiffs to counteract the unconscious prejudices and disguised animus that escape easy classification as disparate treatment.” (Emphasis mine.) Disparate impact has long been seen as a way of proving “disguised animus”—so that is nothing new. However, the idea that disparate impact can be used to get at “unconscious prejudices” is, to my knowledge, an idea new to a Supreme Court majority opinion.

The idea of “unconscious prejudice” is that one can have prejudices of which one is unaware that nonetheless drive one’s actions. It has been kicking around in academia for years. As Mahzarin Banaji and Anthony Greenwald discuss in Blindspot: Hidden Biases of Good People, Greenwald created the Implicit Association Test (IAT) to assess such unconscious biases in 1994. This test can now be found at implicit.harvard.edu. Since taking academia by storm, it has migrated over to industry—companies ranging from Google to Pfizer have laudably adopted it to assist in making their workplaces more inclusive.
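For readers curious about the mechanics, here is a deliberately simplified sketch of the logic behind reaction-time measures of this kind: people tend to respond faster when the paired categories match their implicit associations than when the pairing is reversed, and the standardized gap between the two conditions serves as a bias score. The latencies below are hypothetical, and the published IAT scoring procedure involves additional steps (trial filtering, error penalties, block-specific pooling), so treat this only as an illustration of the idea.

```python
# A toy illustration (not the published IAT scoring algorithm) of how paired-category
# reaction times can be turned into a standardized bias score. All latencies are
# hypothetical numbers chosen for the example.
from statistics import mean, stdev

compatible_ms = [650, 700, 620, 680, 710, 640]     # trials where the pairing matches the association
incompatible_ms = [820, 790, 860, 800, 840, 810]   # trials where the pairing is reversed

pooled_sd = stdev(compatible_ms + incompatible_ms)  # spread across all trials combined
d_score = (mean(incompatible_ms) - mean(compatible_ms)) / pooled_sd

print(f"Standardized latency difference (D-like score): {d_score:.2f}")
# Larger positive values indicate faster responding in the compatible condition,
# i.e., a stronger implicit association in that direction.
```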

Read the entire article, including the portion in which Professor Yoshino discusses potential implications of Justice Kennedy’s acknowledgment, here.

Posted in Implicit Associations, Law, Situationist Contributors, Social Psychology | Leave a Comment »

Morality and Politics: A System Justification Perspective

Posted by The Situationist Staff on March 5, 2015

An Interview with John Jost by Paul Rosenberg

Note: This interview was originally published on Salon.com with an outrageously incendiary title that entirely misrepresented its content.

Introduction by Paul Rosenberg:

In the immediate aftermath of World War II, a wide range of thinkers, both secular and religious, struggled to make sense of the profound evil of the war, particularly Nazi Germany and the Holocaust. One such effort, “The Authoritarian Personality” by Theodor Adorno and three co-authors, opened up a whole new field of political psychology—initially a small niche within the broader field of social psychology—which developed fitfully over the years, but became an increasingly robust subject area in the 1980s and ’90s, fleshing out a number of distinct areas of cognitive processing in which liberals and conservatives differed from one another. Liberal/conservative differences were not the sole concern of this field, but they did appear repeatedly across a growing range of different sorts of measures, including the inclination to justify the existing social order, whatever it might be, an insight developed by John Jost, starting in the 1990s, under the rubric of “system justification theory.”

The field of political psychology gained increased visibility in the 2000s as conservative Republicans controlled the White House and Congress simultaneously for the first time since the Great Depression, and took the nation in an increasingly divisive direction. Most notably, John Dean’s 2006 bestseller, “Conservatives Without Conscience,” popularized two of the more striking developments of the 1980s and ’90s: the constructs of right-wing authoritarianism and social dominance orientation. A few years before that, a purely academic paper, “Political Conservatism as Motivated Social Cognition,” by Jost and three other prominent researchers in the field, caused a brief spasm of political reaction that led some in Congress to talk of defunding the entire field.

But as the Bush era ended and Barack Obama’s rhetoric of transcending right/left differences captured the national imagination, an echo of that sentiment appeared in the field of political psychology as well. Known as “moral foundations theory,” most closely associated with psychologist Jonathan Haidt, and popularized in his book “The Righteous Mind,” it argued that a too-narrow focus on fairness and care/harm avoidance had diminished researchers’ appreciation for the full range of moral concerns, especially a particular subset of concerns that conservatives appear to value more than liberals do. To restore balance, Haidt argued, researchers would have to broaden their horizons—and even engage in affirmative action to recruit conservatives into the field of political psychology. This was, in effect, an argument invoking liberal values—fairness, inclusion, openness to new ideas, and so on—and using them to criticize or even attack what was characterized as a liberal orthodoxy, or even a church-like, closed-minded tribal moral community.

Yet, to some, these arguments seemed to gloss over, or even outright dismiss, a wide body of data, not dogma, from decades of previous research. While people were willing to consider new information and new perspectives, there was a reluctance to throw out the baby with the bathwater, as it were. In the most nitty-gritty sense, the question came down to this: Was the rhetorical framing of the moral foundations argument actually congruent with the detailed empirical findings in the field? Or did it serve more to blur important distinctions that were solidly grounded in rigorous observation?

Recently, a number of studies have raised questions about moral foundations theory in precisely these terms—are the moral foundations more congenial to conservatives actually reflective of non-moral or even immoral tendencies which have already been extensively studied? Late last year, a paper co-authored by Jost—“Another Look At Moral Foundations Theory”—built on these earlier studies to make the strongest case yet along these lines. To gain a better understanding of the field as a whole, moral foundations theory as a challenge within it, the problems that theory is now confronting, and what sort of resolution—and new frontiers—may lie ahead for the field, Paul Rosenberg spoke with John Jost. In the end, he suggested, moral foundations theory and system justification theory may end up looking surprisingly similar to one another, rather than being radically at odds.

PR: You’re best known for your work developing system justification theory, followed by your broader work on developing an integrated account of political ideology. You recently co-authored a paper, “Another Look at Moral Foundations Theory,” which I want to focus on, but in order to do so coherently, I thought it best to begin by first asking you about your own work, and that of others you’ve helped integrate, before turning to moral foundations theory generally, and this critical paper in particular.

So, with that in mind as a game plan, could you briefly explain what system justification theory is all about, how it was that you became interested in the subject matter, and why others should be interested in it as well?

JJ: When I was a graduate student in social psychology at Yale back in the 1990s, I began to wonder about a set of seemingly unrelated phenomena that were all counterintuitive in some way and in need of explanation. So I asked: Why do people stay in abusive relationships, why do women feel that they are entitled to lower salaries than men, and why do African American children come to think that white dolls are more attractive and desirable? Why do people blame victims of injustice, and why do victims of injustice sometimes blame themselves? Why is it so difficult for unions and other organizations to get people to stand up for themselves, and why do we find personal and social change to be so difficult, even painful? Of course, not everyone exhibits these patterns of behavior at all times, but many people do, and it seemed to me that these phenomena were not well explained by existing theories in social science.

And so it occurred to me that there might be a common denominator—at the level of social psychology—in these seemingly disparate situations. Perhaps human beings are in some fairly subtle way prone to accept, defend, justify, and rationalize existing social arrangements and to resist attempts to change the status quo, however well-meaning those attempts may be. In other words, we may be motivated, to varying degrees, to justify the social systems on which we depend, to see them as relatively good, fair, legitimate, desirable, and so on.

This did not strike me as implausible, given that social psychologists had already demonstrated that we are often motivated to defend and justify ourselves and the social groups to which we belong. Most of us believe that we are better drivers than the average person and more fair, too, and many of us believe that our schools or sports teams or companies are better than their rivals and competitors. Why should we not also want to believe that the social, economic, and political institutions that are familiar to us are, all things considered, better than the alternatives? To believe otherwise is at least somewhat painful, insofar as it would force us to confront the possibility that our lives and those of others around us may be subject to capriciousness, exploitation, discrimination, and injustice, and that things could be different, better—but they are not.

In 2003, a paper you co-authored, “Political Conservatism as Motivated Social Cognition” caused quite a stir politically—there were even brief rumblings in Congress to cut off all research funding, not just for you, but for an entire broad field of research, though you managed to quell those rumblings in a subsequent Washington Post op-ed. That paper might well be called the tip of the iceberg of a whole body of work you’ve helped draw together, and continued to work on since then. So, first of all, what was that paper about?

We wanted to understand the relationship, if any, between psychological conservatism—the mental forces that contribute to resistance to change—and political conservatism as an ideology or a social movement. My colleagues and I conducted a quantitative, meta-analytic review of nearly fifty years of research conducted in 12 different countries and involving over 22,000 research participants or individual cases. We found 88 studies that had investigated correlations between personality characteristics and various psychological needs, motives, and tendencies, on one hand, and political attitudes and opinions, on the other.
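To make “quantitative, meta-analytic review” concrete, the sketch below shows the core arithmetic typically used to pool correlations across studies: each study’s r is converted to Fisher’s z, averaged with weights reflecting sample size, and converted back to a correlation. The study values are invented for illustration; they are not data from the 2003 paper, and the actual review involved more elaborate methods.

```python
# A minimal, fixed-effect-style sketch of pooling correlations across studies.
# The (r, n) pairs below are hypothetical examples, not the actual meta-analytic data.
import math

studies = [(0.30, 120), (0.22, 300), (0.41, 85), (0.18, 540)]  # (correlation, sample size)

def fisher_z(r: float) -> float:
    """Variance-stabilizing transform for a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

# Weight each study by n - 3, the inverse of the sampling variance of Fisher's z.
weighted_z = sum((n - 3) * fisher_z(r) for r, n in studies) / sum(n - 3 for _, n in studies)
pooled_r = math.tanh(weighted_z)  # back-transform to the correlation scale

print(f"Pooled correlation estimate: {pooled_r:.2f}")
```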

And what did it show?

We found pretty clear and consistent correlations between psychological motives to reduce and manage uncertainty and threat—as measured with standard psychometric scales used to gauge personal needs for order, structure, and closure, intolerance of ambiguity, cognitive simplicity vs. complexity, death anxiety, perceptions of a dangerous world, etc.—and identification with and endorsement of politically conservative (vs. liberal) opinions, leaders, parties, and policies.

How did politicians misunderstand the paper, and how did you respond?

I suspect that there were some honest misunderstandings as well as some other kinds. One issue is that many people seem to assume that whatever psychologists are studying must be considered (by the researchers, at least) as abnormal or pathological. But that is simply untrue. Social, cognitive, developmental, personality, and political psychologists are all far more likely to study attitudes and behaviors that are normal, ordinary, and mundane. We are primarily interested in understanding the dynamics of everyday life. In any case, none of the variables that my colleagues and I investigated had anything to do with psychopathology; we were looking at variability in normal ranges within the population and whether specific psychological characteristics were correlated with political opinions. We tried to point some of these things out, encouraging people to read beyond the title, and emphasizing that there are advantages as well as disadvantages to being high vs. low on the need for cognitive closure, cognitive complexity, sensitivity to threat, and so on.

How has that paper been built on since?

I am gratified and amazed at how many research teams all over the world have taken our ideas and refined, extended, and otherwise built upon them over the last decade. To begin with, a number of studies have confirmed that political conservatism and right-wing orientation are associated with various measures of system justification. And public opinion research involving nationally representative samples from all over the world establishes that the two core value dimensions that we proposed to separate the right from the left—traditionalism (or resistance to change) and acceptance of inequality—are indeed correlated with one another, and they are generally (but not always) associated with system justification, conservatism, and right-wing orientation.

Since 2003, numerous studies have replicated the correlations we observed between epistemic motives (including personal needs for order, structure, and closure) and resistance to change, acceptance of inequality, system justification, conservatism, and right-wing orientation. Several studies find that liberals score higher than conservatives on the need for cognition, which captures the individual’s chronic tendency to enjoy effortful forms of thinking. This finding is potentially important because individuals who score lower on the need for cognition favor quick, intuitive, heuristic processing of new information, whereas those who score higher are more likely to engage in more elaborate, systematic processing (what Daniel Kahneman refers to as System 1 and System 2 thinking, respectively). The relationship between epistemic motivation and political orientation has also been explored in research on nonverbal behavior and neurocognitive structure and functioning.

Various labs have also replicated the correlations we observed between existential motives, including attention and sensitivity to dangerous and threatening stimuli, and resistance to change, acceptance of inequality, and conservatism. Ingenious experiments have demonstrated that temporary activation of epistemic needs to reduce uncertainty or to attain a sense of control or closure increases the appeal of system justification, conservatism, and right-wing orientation. Experiments have demonstrated that temporary activation of existential needs to manage threat and anxiety likewise increases the appeal of system justification, conservatism, and right-wing orientation, all other things being equal. These experiments are especially valuable because they identify causal relationships between psychological motives and political orientation.

Progress has also been made in understanding connections between personality characteristics and political orientation. In terms of “Big Five” personality traits, studies involving students and nationally representative samples of adults tell exactly the same story: Openness to new experiences is positively associated with a liberal orientation, whereas Conscientiousness (especially the need for order) is positively associated with conservative orientation. In a few longitudinal studies, childhood measures of intolerance of ambiguity, uncertainty, and complexity as well as sensitivity to fear, threat, and danger have been found to predict conservative orientation later in life. Finally, we have observed that throughout North America and Western Europe, conservatives report being happier and more satisfied than liberals, and this difference is partially (but not completely) explained by system justification and the acceptance of inequality as legitimate. As we suspected many years ago, there appears to be an emotional or hedonic cost to seeing the system as unjust and in need of significant change.

“Moral foundations theory” has gotten a lot of popular press, as well as serious attention in the research community, but for those not familiar with it, could you give us a brief description, and then say something about why it is problematic on its face (particularly in light of the research discussed above)?

The basic idea is that there are five or six innate (evolutionarily prepared) bases for human “moral” judgment and behavior, namely fairness (which moral foundations theorists understand largely in terms of reciprocity), avoidance of harm, ingroup loyalty, obedience to authority, and the enforcement of purity standards. My main problem is that sometimes moral foundations theorists write descriptively as if these are purely subjective considerations—that people think and act as if morality requires us to obey authority, be loyal to the group, and so on. I have no problem with that descriptive claim—although this is surely only a small subset of the things that people might think are morally relevant—as long as we acknowledge that people could be wrong when they think and act as if these are inherently moral considerations.

At other times, however, moral foundations theorists write prescriptively, as if these “foundations” should be given equal weight, objectively speaking, that all of them should be considered virtues, and that anyone who rejects any of them is ignoring an important part of what it means to be a moral human being. I and others have pointed out that many of the worst atrocities in human history have been committed not merely in the name of group loyalty, obedience to authority, and the enforcement of purity standards, but because of a faithful application of these principles. For 24 centuries, Western philosophers have concluded that treating people fairly and minimizing harm should, when it comes to morality, trump group loyalty, deference to authority, and purification. In many cases, behaving ethically requires impartiality and disobedience and the overcoming of gut-level reactions that may lead us toward nepotism, deference, and acting on the basis of disgust and other emotional intuitions. It may be difficult to overcome these things, but isn’t this what morality requires of us?

There have been a number of initial critical studies published, which you cite in this new paper. What have they shown?

Part of the problem is that moral foundations theorists framed their work, for rhetorical purposes, in strong contrast to other research in social and political psychology, including work that I’ve been associated with. But this was unnecessary from the start and, in retrospect, entirely misleading. They basically said: “Past work suggests that conservatism is motivated by psychological needs to reduce uncertainty and threat and that it is associated with authoritarianism and social dominance, but we say that it is motivated by genuinely moral—not immoral or amoral—concerns for group loyalty, obedience to authority, and purity.” This has turned out to be a false juxtaposition on many levels.

First, researchers in England and the Netherlands demonstrated that threat sensitivity is in fact associated with group loyalty, obedience to authority, and purity. For instance, perceptions of a dangerous world predict the endorsement of these three values, but not the endorsement of fairness or harm avoidance. Second, a few research teams in the U.S. and New Zealand discovered that authoritarianism and social dominance orientation were positively associated with the moral valuation of ingroup, authority, and purity, but not with the valuation of fairness and avoidance of harm. Psychologically speaking, the three so-called “binding foundations” look quite different from the two more humanistic ones.

What haven’t these earlier studies tackled that you wanted to address? And why was this important?

These other studies suggested that there was a reasonably close connection between authoritarianism and the endorsement of ingroup, authority, and purity concerns, but they did not investigate the possibility that individual differences in authoritarianism and social dominance orientation could explain, in a statistical sense, why conservatives value ingroup, authority, and purity significantly more than liberals do and—just as important, but often glossed over in the literature on moral foundations theory—why liberals value fairness and the avoidance of harm significantly more than conservatives do.

How did you go about tackling these unanswered questions? What did you find and how did it compare with what you might have expected?

There was a graduate student named Matthew Kugler (who was then studying at Princeton) who attended a friendly debate about moral foundations theory that I participated in and, after hearing my remarks, decided to see whether the differences between liberals and conservatives in terms of moral intuitions would disappear after statistically adjusting for authoritarianism and social dominance orientation. He conducted a few studies and found that they did, and then he contacted me, and we ended up collaborating on this research, collecting additional data using newer measures developed by moral foundations theorists as well as measures of outgroup hostility.
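As a rough illustration of what “statistically adjusting” means here, the sketch below simulates data in which an “authority” score depends on right-wing authoritarianism (RWA) and social dominance orientation (SDO) rather than on conservatism directly, and then compares the conservatism coefficient with and without those covariates in the regression. The variable names, the simulated data, and the ordinary-least-squares setup are assumptions made for the example, not the measures or models from the actual studies.

```python
# A hedged sketch (simulated data, not the actual study) of covariate adjustment:
# does the apparent effect of conservatism shrink once RWA and SDO are controlled for?
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
conservatism = rng.normal(size=n)
rwa = 0.6 * conservatism + rng.normal(scale=0.8, size=n)  # right-wing authoritarianism
sdo = 0.5 * conservatism + rng.normal(scale=0.8, size=n)  # social dominance orientation
# Assume authority endorsement is driven by RWA and SDO, not by conservatism directly.
authority = 0.7 * rwa + 0.4 * sdo + rng.normal(size=n)

def ols_coefs(y, *predictors):
    """Ordinary least squares: return the coefficients that follow the intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

print("conservatism coefficient, unadjusted:", round(ols_coefs(authority, conservatism)[0], 2))
print("conservatism coefficient, adjusted:  ",
      round(ols_coefs(authority, conservatism, rwa, sdo)[0], 2))  # moves toward zero
```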

What does it mean for moral foundations theory?

To me, it means that scholars may need to clean up some of the conceptual confusion in this area of moral psychology, and researchers need to face up to the fact that some moral intuitions (things that people may think are morally relevant and may use as a basis for judging others) may lead people to behave in an unethical, discriminatory manner. But we need behavioral research, such as studies of actual discrimination, to see if this is actually the case. So far the evidence is mainly circumstantial.

And what future research is to come along these lines from you?

One of my students decided to investigate the relationship between system justification and its motivational antecedents, on one hand, and the endorsement of moral foundations, on the other. This work also suggests that the rhetorical contrast between moral foundations theory and other research in social psychology was exaggerated. We are finding that, of the variables we have included, empathy is the best psychological predictor of endorsing fairness and the avoidance of harm as moral concerns, whereas the endorsement of group loyalty, obedience to authority, and purity concerns is indeed linked to epistemic motives to reduce uncertainty (such as the need for cognitive closure) and existential motives to reduce threat (such as death anxiety) and to system justification in the economic domain. So, at a descriptive level, moral foundations theory is entirely consistent with system justification theory.

Finally, I’ve only asked some selective questions, and I’d like to conclude by asking what I always ask in interviews like this—What’s the most important question that I didn’t ask? And what’s the answer to it?

Do I think that social science can help to address some of the problems we face as a society? Yes, I am holding out hope that it can, at least in the long run, and hoping that our leaders will come to realize this eventually.

Our conversation leads me to want to add one more question. Haidt’s basic argument could be characterized as a combination of anthropology—look at all the “moral principles” different cultures have advanced—and the broad equation of morality with the restraint of individual self-interest and/or desire. Your paper, bringing to attention the roles of SDO and RWA, throws into sharp relief a key problem with such a formulation—one that Southern elites have understood for centuries: wholly legitimate individual self-interest (and even morality—adequately feeding and providing a decent future for one’s children, for example) can easily be overridden by appeals to heinous “moral concerns,” such as “racial purity,” or more broadly, upholding the “God-given racial order.”

Yet Haidt does seem to have an important point: individualist moral concerns leave something unsaid about the value of the social dimension of human experience, which earlier moral traditions have addressed. Do you see any way forward toward developing a more nuanced account of morality that benefits from the criticism that harm avoidance and fairness may be too narrow a foundation, without embracing the sorts of problematic alternatives put forward so far?

Yes, and there is a long tradition of theory and research on social justice—going all the way back to Aristotle—that involves a rich, complex, nuanced analysis of ethical dilemmas and goes well beyond the assumption that fairness is simply about positive and negative reciprocity.

Without question, we are a social species with relational needs and dependencies, and how we treat other people is fundamental to human life, especially when it comes to our capacity for cooperation and social organization. When we are not engaging in some form of rationalization, there are clearly recognizable standards of procedural justice, distributive justice, interactional justice, and so on. Even within the domain of distributive justice—which has to do with the allocation of benefits and burdens in society—there are distinct principles of equity, equality, and need, and in some situations these principles may be in conflict or contradiction.

How to reconcile or integrate these various principles in theory and practice is no simple matter, and this, it seems to me, is what we should focus on working out. We should also focus on solving other dilemmas, such as how to integrate utilitarian, deontological, virtue-theoretical, and social contractualist forms of moral reasoning, because each of these—in my view—has some legitimate claim on our attention as moral agents.

Related Situationist posts:

To review the full collection of Situationist posts related to system justification, click here.

Posted in Ideology, Morality, Situationist Contributors, Social Psychology, System Legitimacy | 3 Comments »

Systemic Justice Project in The Globe

Posted by J on February 7, 2015

Below are excerpts from Courtney Humphries’s superb Boston Globe article about the Systemic Justice Project at Harvard Law School (cartoon by Sam Washburn and photo by Justin Saglio, both for the Globe):

From the first day, it’s clear that law professor Jon Hanson’s new Systemic Justice class at Harvard Law School is going to be different from most classes at the school. Hanson, lanky, bespectacled, and affable, cracks jokes as he paces the room. He refers to the class of 50-odd students as a community; he even asks students to brainstorm a name for the group. But behind the informality is a serious purpose: Hanson is out to change the way law is taught.

“None of us really knows what ‘systemic justice’ is—yet you’re all here,” he points out. The new elective class, which is being taught for the first time in this spring term, will ask students to examine common causes of injustice in history and ways to use law and activism to even the field.

Traditionally, students come to law school to master existing laws and how to apply them. But surveys given to the students in this class beforehand show that most are worried about big unsolved social problems—income inequality, climate change, racial bias in policing—and believe that law is part of the problem. The goal of Hanson’s class is to introduce a new approach.

The class is part of a new Systemic Justice Project at Harvard, led by Hanson and recent law school graduate Jacob Lipton. They’re also leading a course called the Justice Lab, a kind of think tank that will ask students to analyze systemic problems in society and propose legal solutions. Both classes go beyond legal doctrine to show how history, psychology, and economics explain the causes of injustice. A conference in April will bring students and experts together to discuss their findings.

Harvard’s project is an unusual one, but it arises out of a growing recognition that law students need to be trained to be problem-solvers and policy makers. As Hanson tells his students that first day, “If you’re thinking about systemic justice, you need to be thinking about legal education.” He believes that this education should be less about learning the status quo and more about how the next generation of lawyers can change it.

There’s widespread acknowledgment that justice is often meted out unfairly; decades of scholarship have shown how social biases based on race, gender, corporate interests, or ideology find their way into written laws. Nevertheless, Hanson says, law school classes don’t always give students the tools to counteract injustices. “My students have expressed increasing amounts of frustration with the fact that many of our biggest problems are not being addressed by the legal system,” he says. Lipton was one of those students. After graduating in 2014, he turned down a fellowship in Washington, D.C., to stay at Harvard and help Hanson see the new project through.

One of their targets is the case method of legal education, which has been the dominant form of teaching law in America since it was introduced at Harvard Law School by Christopher Columbus Langdell after he became its dean in 1870. Rather than lecturing his students, Langdell asked them to examine judicial cases of the past. Then, through a process of Socratic questioning, he would challenge students to explain their knowledge and interpretation of a case, allowing them to glean the deeper principles of the law, much like a scientist would examine evidence.

Though the case method has evolved since the 19th century, the primary text of most classes is still the casebook—a set of legal decisions chosen for their ability to illustrate legal principles. Professors who embrace it say this approach forces students—particularly first-year students with little legal training—to think like lawyers. “Within a few weeks, I have reprogrammed their brains,” says Bruce Mann, a law professor at Harvard who’s known for his rapid-fire questions in class. “That doesn’t mean that it is backward-looking. I’m really teaching them how to think.” Mann, like many professors these days, tries to put cases in a larger historical and social context.


But Hanson and Lipton believe that the case method, while helpful in the hands of skilled teachers, puts too much emphasis on what the law already is, rather than what it should be. It tends to assume that decisions of the past are fair and appropriate. Instead, says Lipton, “we think that legal education should start with what the problems are in the world.”

They also take issue with the way that law gets divided into categories—tax law, criminal law, property law, torts, contracts—each with different professors and different casebooks. Douglas Kysar, a law professor at Yale Law School and a former student of Hanson’s who has embraced his interdisciplinary approach to the law, says that these divisions can hinder tackling issues that existing laws don’t address and can allow problems that run across disciplines to go unaddressed. “In each one of those fields, we often try to present the cases and materials as if they’re an efficient and fair whole,” he says. When something arises to challenge that picture, professors can pass the buck. For instance, in environmental law, one of Kysar’s specialties, it’s not always clear where the responsibility to fix a problem like pollution lies. “Everyone’s pointing their fingers at other systems that are supposed to address a harm,” he says. “There’s no place where you’re looking at the systems in a cross section.”

Hanson and Lipton also argue that the law focuses too much on the actions and disputes of individuals—and not even on an accurate vision of how individuals behave. “In many cases, the focus on the individual obscures what the actual problem is or what the solutions are,” Lipton says. Hanson, meanwhile, has long argued that the vision of the individual that exists in law isn’t well backed up by research. He directs the Project on Law and Mind Sciences at Harvard, which brings findings from social psychology and social cognition to bear on the law. The law generally treats people as rational actors making decisions based on their own knowledge and beliefs. In fact, Hanson says, research has shown that people are easily swayed by their circumstances. Through their academic writing and on a blog called The Situationist, Hanson and a growing group of like-minded scholars have argued that solving systemic problems means focusing more on forces that act on us, rather than assigning blame and punishment to individual actors.

A systemic approach to racial bias in policing, for instance, might look at psychological research on unconscious racial bias, police training techniques, and law enforcement policies in order to create a more just system, rather than focusing on the actions of a specific officer. For the problem of rising student debt, another complex issue that students in the Justice Lab think tank are likely to tackle, it might look at federal loan systems that allow for-profit colleges to put students in debt without providing enough value in return. Another example is obesity and the food system; a systemic approach would look at ways that advertising, agricultural subsidies, supermarket zoning, and food service practices create an unhealthy system for consumers. “We want to examine the role that large commercial interests play in shaping laws,” Hanson says. Solutions might involve class actions, new regulations, or institutional changes.

. . . . Hanson thinks that the idea of systemic justice resonates now in a way that it hasn’t always in the past. “I think that is a reflection of the change in the mood in the country and in this generation of law students,” he says.

* * *

The Systemic Justice Project, though unique in some ways, is part of a larger effort to introduce a policy focus into law school—Stanford Law School, for instance, recently launched a Law and Policy Lab that asks students to find policy solutions to real-world problems. “Traditionally, law school education has been doctrinal,” says Sergio Campos, a law professor at the University of Miami and visiting professor at Harvard. “You teach students what the law is and how to apply it.” . . . . “When you get to a position where you can change the law, you don’t have a background on policy and what it should be,” Campos says.

* * *

. . . Harvard law professor David Rosenberg . . . . believes law schools often leave students unprepared to think broadly. “Over and over again in my many decades at Harvard, students have told me that my advice contradicts their instruction in other courses that making social policy arguments is a confession of weakness in your legal position, and should be done, if ever, only as a last resort,” he says. “We’re de-training them.”

Rena Karefa-Johnson, a second-year student who’s signed up for both the Systemic Justice class and the Justice Lab, admits that some students simply want to learn existing law and don’t appreciate Hanson’s approach. But it’s been popular with students like her who are already active in fighting for social causes. “The law is inherently political,” she says. “He does not allow his students to learn the law outside of its context.”

Read the entire article here.

Posted in Education, Law, Legal Theory, Public Policy, Situationist Contributors | Leave a Comment »

Stanford Prison Experiment – The Movie

Posted by The Situationist Staff on February 2, 2015

From ETonline:

The Stanford Prison Experiment, which premiered this week at Sundance to mostly positive reviews, is not always an easy film to watch.

Much of the action takes place in a barren 6-foot-wide hallway. The characters–seemingly normal and well-adjusted Stanford students recruited to participate in a landmark 1971 study about the psychology of imprisonment–take their role-playing as prisoner and guard to extremes, turning power-hungry, violent and occasionally sadistic. The “grown-ups,” led by researcher Philip Zimbardo (played by Billy Crudup), watch a live feed of the action from a nearby office and fail to stop the abuse–fueled by their own power trips and unchecked ambition.

None of the men or boys come off looking very good in the film, though director Kyle Patrick Alvarez does a masterful job humanizing them. And it’s impossible to watch without wondering how you’d react if parachuted into Zimbardo’s simulated prison. Would you stand up for yourself–or for the humanity of others? And can we really know until we’ve been there?

“One of the big questions this film deals with is, ‘Are we who we think we are?’” Crudup said when we sat down in Park City, Utah, this week to discuss the film. “This story talks about the ways we don’t fulfill our own moral capacity, and that what we think of as our true self is actually the product of many different situations, institutions, and places.”

Crudup (Almost Famous) is excellent as Dr. Zimbardo, a man who so badly wants to effect positive change in the world–and have an impact as a psychologist–that he’s willing to let his study subjects endure psychological torture for what he perceives as a greater good. It isn’t until the sixth day, when his girlfriend and fellow researcher (played by Juno’s Olivia Thirlby) objects to the experiment’s direction, that he finally accepts the damage he’s doing.

Read the entire review here.

Related Situationist posts:

Posted in Situationist Contributors, Social Psychology, Video | 1 Comment »

Mahzarin Banaji on “Group Love”

Posted by The Situationist Staff on November 17, 2013


From Yale News (by Phoebe Kimmelman):

On Thursday evening, Harvard psychologist Mahzarin Banaji delivered a talk entitled “Group Love,” in which she demonstrated that the audience held an implicit bias for Yale over Princeton.

Banaji, who worked as a professor of psychology at Yale from 1986 to 2002 before taking a similar post at Harvard, focused in her talk on how group affiliations, or lack thereof, affect the ways in which we see the world and interact with others. In her research, Banaji has helped bring Freudian theories of the subconscious into the psychology laboratory to be tested empirically.

University President Peter Salovey delivered introductory remarks, saying Banaji had been the “heart and soul” of the Yale psychology department during her 16 years there.

“She is one of those scientists who changes her field with her insights and her empirical data, with a deep sense of social responsibility to her colleagues, her students and her field,” Salovey said.

In the lecture to roughly 100 people, Banaji first discussed an experiment she did in 2006 at Harvard that involved monitoring participants’ brain activity while they answered random questions about two hypothetical people, presented with only their political preferences. Neuroimaging showed that the subjects used different areas of the brain to make predictions about people with whom they agree and those with whom they disagree. Banaji used this study to introduce the idea of love of the in-group, a preference people have for a group of people who think the way that they themselves do.

Through presenting multiple studies, Banaji demonstrated the magnitude of positive bias towards the in-group in subjects ranging from sports fans to elementary school students. While we may not be able to eliminate our biases, Banaji said certain cognitive strategies can “outsmart” them. For instance, Banaji said she rotates images that defy racial and gender stereotypes among her computer screensavers.

“It’s not that we hate people of another group, but it’s love for the in-group that’s paramount,” she said.

Salovey and Banaji, who started as faculty at Yale on the very same day, were close friends and next door neighbors, he said. Salovey recalled that he and Banaji were each other’s “support systems” while writing PSYC 110 lectures together.

Banaji came to campus for this year’s Silliman Memorial Lecture, an annual speakership that began in 1888 and has brought such prominent scientific figures to campus as J.J. Thomson and Ernest Rutherford. Though a committee of faculty from Yale science departments usually chooses a speaker whose research is in the hard natural sciences, committee chair and Sterling professor of molecular biophysics and biochemistry Joan Steitz said that her colleagues were eager to hear from Banaji this year. Though the lecture has no affiliation with Silliman College, the endowment is named for the mother of Benjamin Silliman, a scientist after whom the college is named.

“If you think about the impact that psychology and neurobiology and brain science [are] having these days, the committee did not consider it at all inappropriate to be going in that direction with this particular lecture,” Steitz said.

Since leaving Yale in 2002, Banaji has served as a professor of social ethics in Harvard’s psychology department, where she has continued her research on how unconscious thinking plays out in social situations.

Nick Friedlander ’17 said he found the lecture “eye-opening” because it revealed biases he did not know he held before.

For Zachary Williams ’17, the lecture demonstrated how little of the conscious mind controls mental processes.

“It was truly a treat to be able to sit in close quarters with such a fantastic paragon of academia and hear her talk about such relevant topics,” he said.

Banaji’s most recent book is entitled “Blindspot: Hidden Biases of Good People.”

Related Situationist posts:

Posted in Emotions, Implicit Associations, Morality, Neuroscience, Situationist Contributors | 1 Comment »

“The Future of Equality” ACS Conference – This Weekend

Posted by The Situationist Staff on November 15, 2013

The American Constitution Society Northeast Regional Student Convening will bring together lawyers and law students from across the Northeast to hear leading practitioners, government officials, judges, and academics discuss a progressive vision for the future of equality across a number of salient policy issues. Learn more here.


Posted in Events, Politics, Situationist Contributors | Leave a Comment »

Natasha Schvey on Obesity in the Courtroom – Today!

Posted by The Situationist Staff on November 1, 2013


When: Friday 11/01/13 12-1pm
Where: WCC 2012

Today, join Section 6’s Ninja Tortles and the Student Association for Law and Mind Sciences (SALMS) for a talk by Natasha Schvey on bias against overweight defendants in the courtroom. Schvey, a doctoral student in clinical psychology at Yale University, has focused her research on obesity, weight stigma, binge eating, and eating in response to negative affect. She argues that bias against the overweight should be addressed in the legal system through means such as juror selection, jury instructions, and anti-discrimination legislation.

The talk will be held at noon in WCC 2012, and food will be served.

Posted in Events, SALMS, Situationist Contributors | Leave a Comment »

Conference on the Legacy of Stanley Milgram

Posted by The Situationist Staff on October 25, 2013


Yale Law School is hosting a conference on the Legacy of Stanley Milgram this Saturday.  Unsurprisingly, many Situationist Contributors (Thomas Blass, Jon Hanson, Dan Kahan, and Tom Tyler) and Situationist friends (Phoebe Ellsworth, Doug Kysar, and Jaime Napier) will be participating.  The conference agenda is below.

Saturday, October 26, 2013
Yale Law School
Sponsored by the Oscar M. Ruebhausen Fund

9:00-9:30
Registration and Breakfast

9:30-10:00
Introduction
Peter Salovey, President of Yale University

10:00-11:00
The role of situational forces in shaping false confessions
Saul Kassin, Distinguished Professor of Psychology, Department of Psychology, John Jay College of Criminal Justice
Moderator: Marcia K. Johnson, Sterling Professor of Psychology, Department of Psychology, Yale University

11:00-12:00
Situationism in law
Jon D. Hanson, Alfred Smart Professor and Director, Project on Law and Mind Sciences, Harvard Law School
Moderator: Douglas Kysar, Joseph M. Field ’55 Professor of Law, Yale Law School

12:00-12:15
Pick up box lunch

12:15-1:00
Reflections on the life and work of Stanley Milgram
Thomas Blass, Professor of Psychology, University of Maryland, Baltimore County, and author of The Man Who Shocked the World
Moderator: Tom Tyler, Macklin Fleming Professor of Law and Professor of Psychology, Yale Law School and Department of Psychology, Yale University

1:00-2:00
Obedience to authority: Thoughts on Milgram as a filmmaker
Kathryn Millard, Professor of Film and Creative Arts, Department of Media, Music, Communication and Cultural Studies, Macquarie University, Sydney, AU
Moderator: Sarah Ryan, Empirical Research Librarian & Lecturer in Legal Research, Yale Law School

2:00-3:00
Inattentive bureaucrats or engaged followers? Understanding Milgram’s subjects
S. Alex Haslam, Professor of Psychology and ARC Laureate Fellow, School of Psychology, The University of Queensland, St. Lucia, AU
Moderator: Jaime Napier, Assistant Professor of Psychology, Department of Psychology, Yale University

3:00-4:00
Milgram’s legacy in social psychology
Phoebe Ellsworth, Frank Murphy Distinguished University Professor of Law and Psychology, University of Michigan
Moderator: Dan Kahan, Elizabeth K. Dollard Professor of Law and Professor of Psychology, Yale Law School

Related Situationist posts:

Posted in Classic Experiments, Events, Situationist Contributors, Social Psychology | Leave a Comment »

Jon Hanson on Law and Mind Sciences – SALMS Talk Monday!

Posted by The Situationist Staff on October 20, 2013


When: Monday 10/21/13 12-1pm
Where: WCC 1010

Professor Jon Hanson will kick off this year’s SALMS speaker series, discussing the significance of mind sciences for law.

Hanson is the Alfred Smart Professor of Law, Director of the Project on Law and Mind Sciences, and editor of the recent book, “Ideology, Psychology, and Law.”

Lunch will be provided.

Posted in Events, SALMS, Situationist Contributors | Leave a Comment »

A New Situationist Fellow – Fábio Almeida

Posted by The Situationist Staff on October 13, 2013

We are happy to introduce a new Situationist Fellow, Fábio Almeida.

Fábio Portela L. Almeida is a 2003 graduate of the Universidade de Brasília Law School in Brazil. After graduating, he worked as a lawyer, and since 2006 he has worked as a clerk at the Brazilian Superior Court of Labour Law. He also earned a Master of Laws degree in 2007 at the same university, where he wrote a dissertation on constitutional issues arising from religious teaching in Brazilian public schools, which was published as a book in 2008.

In 2011, he earned an M.Phil. degree from the Universidade de Brasília Department of Philosophy. His dissertation, “The evolution of a normative mind: origins of human cooperation,” was awarded the ANPOF Prize for the best philosophical dissertation of the 2010/2011 biennium. Currently, Fábio is an S.J.D. candidate at the Universidade de Brasília Law School and a Visiting Researcher at Harvard Law School. His research interests center on the interdisciplinary relationships among legal theory, biology, psychology, moral philosophy, economics, sociology, and anthropology.

In his free time, Fábio enjoys writing about stock investing on his personal blog, listening to classical music, reading, traveling, and watching movies. Fábio is a longtime reader of The Situationist, and we are delighted that he is visiting HLS for the year and contributing to the blog as a fellow. Look for his first post soon.

Posted in Evolutionary Psychology, Situationist Contributors | Tagged: | Leave a Comment »

2013 SPSP Awards

Posted by The Situationist Staff on October 11, 2013


From SPSP Website:

September 18, 2013 – When you pass by a stranger in need of help, do you stop to lend a hand? Maybe not… A landmark 1973 study found that seminary students in a hurry were less likely to help someone in distress, even when they were on their way to deliver a talk on the Parable of the Good Samaritan. A co-author of that study and several other distinguished researchers are the recipients of the 2013 annual awards from the Society for Personality and Social Psychology (SPSP). The contributions of these scientists to personality and social psychology include furthering our understanding of how personality shapes health and well-being across adulthood, why it’s so hard to evaluate ourselves, and the virtues that divide political ideologies.

The Society’s highest awards – the Jack Block, Donald T. Campbell, and Distinguished Scholar awards – go to Robert R. (“Jeff”) McCrae, retired from the National Institute on Aging, [Situationist Contributor] Timothy D. Wilson of the University of Virginia, and Carol S. Dweck of Stanford University, respectively. The Career Contribution awards, which honor scholars whose research has led the field in new directions, go to C. Daniel Batson of the University of Kansas and [Situationist friend] James Sidanius of Harvard University.

Good Samaritan, Social Dominance

Batson co-authored with [Situationist Contributor] John Darley the 1973 study on the “bystander effect” – revealing processes that influence how and when we help people. His work looks at a variety of topics that bridge psychology and religion, including altruism, empathy, and compassion. Batson is a leading proponent of the existence of pure or selfless altruism, in which people help out of a genuine concern for the welfare of others.

Sidanius’ work explains the acceptance of group-based social hierarchy – such as believing that men are superior to women or that Whites are superior to people of color – by both the dominant and oppressed groups. Long before others were convinced, Sidanius analyzed the inevitability and the significance of hierarchy in structuring society, social relations, and psychological functioning – pioneering the study of the widely shared cultural ideologies that provide the justification for group-based hierarchies.

Personality, Self-Insight, and Mindset

McCrae’s work on personality in aging adults led to a resurgence of personality psychology in the 1980s and the establishment of the Big Five model of personality traits that persists today. His work has shown how individual differences in personality traits affect everything from health to coping. McCrae has established new ways of measuring personality traits and has looked at the effects of personality cross-culturally. Recently, he has written provocative papers on the future of personality psychology for the 21st century, including exploring the molecular genetics of personality dispositions.

Wilson’s research examines why it is so hard for people to accurately evaluate themselves. He has shed light on the ways in which people are mistaken about themselves, whether wrong about the causes of their past actions or about their present attitudes. His book Strangers to Ourselves explored the challenges of self-insight. An Elected Fellow in the American Association for the Advancement of Science and an Elected Member of the American Academy of Arts and Sciences, Wilson works to ensure that public policy is informed by scientific fact.

Dweck’s work has examined how people’s mindsets shape their lives and determine their achievement. In a series of well-known studies, Dweck demonstrated how people with a “growth mindset,” who believe that certain qualities, such as intelligence, can be developed, make life choices that lead to greater success than those with a “fixed mindset,” who believe that basic abilities are unchangeable. This distinction profoundly affects people’s motivation, psychological well-being, and learning, and the ideas have been extended to apply to work in diverse areas, such as education and intergroup relations.

Math and Science Intervention, Political Ideologies, Hidden Bias

An intervention aimed at parents can boost children’s interest in math and science, according to the study awarded this year’s Robert B. Cialdini Award for excellence in a published field study. Judith Harackiewicz of the University of Wisconsin, with colleagues Christopher Rozek, Chris Hulleman, and Janet Hyde, sent parents of high-school students information that emphasized the importance of mathematics and science to college, career, and everyday life, and that offered tips for parents to communicate this importance to their children. Compared to a control group, children whose parents received the information took nearly a full extra semester of math and science. The paper, “Helping parents to motivate adolescents in mathematics and science: An experimental test of a utility-value intervention,” was published in Psychological Science. Honorable Mention for the Cialdini Award goes to “Signing at the beginning makes ethics salient and decreases dishonest self-reports in comparison to signing at the end,” by Lisa L. Shu and colleagues in the Proceedings of the National Academy of Sciences.

The recipient of the Media Book Prize is Jonathan Haidt for The Righteous Mind, which takes a tour of how people bind themselves to political and religious teams and the moral narratives that accompany them. Using a range of arguments – anthropological, psychological, and evolutionary – Haidt proposes that the U.S. political left and the right emphasize different virtues and he suggests that we use that discovery to try to get along.

The Methodological Innovation Award goes to Anthony G. Greenwald of the University of Washington, who in 1998 created the Implicit Association Test (IAT) – a widely-used method for measuring attitudes, stereotypes, self-concepts, and self-esteem without relying on self-reporting. Researchers have used the IAT in fields ranging from education and health to forensics and marketing. Tens of thousands of people weekly visit the Project Implicit website, created by Greenwald and colleagues.

Recipients of the Carol and Ed Diener Award in Personality Psychology and the Carol and Ed Diener Award in Social Psychology are Andrew J. Elliot of the University of Rochester and Nalini Ambady of Stanford, respectively. Elliot studies achievement and social motivation, particularly in educational contexts, and focuses on how approach and avoidance temperaments, motives, and goals influence psychological functioning. Ambady’s work looks at “thin slices” – showing that social, emotional, and perceptual judgments made on the basis of brief behavioral observations can be surprisingly accurate.

The remaining SPSP awards for 2013 are as follows:

  • The 2013 SPSP Award for Service on Behalf of Personality and Social Psychology: Kay Deaux of City University of New York and Hazel Rose Markus of Stanford. A great mentor and supporter of diversity in the field, Deaux has done pioneering work on gender, identity, and immigration, reflecting her deep social consciousness. Markus has worked to create the field of cultural psychology – shifting it from the assumption that research findings in one culture represent basic processes of human nature to the idea of linking different social and personality processes to gender, race, social class, age, and culture.
  • The 2013 SPSP Service Award for Distinguished Service to the Society: Wendi Gardner of Northwestern University and George (Al) Goethals of the University of Richmond’s Jepson School of Leadership Studies. Through her service to the Society, Gardner has played a vital role in shaping the organization’s annual conferences and has been a passionate advocate for graduate students. As Secretary-Treasurer of SPSP (1995-1997), Goethals shepherded the Society through lean financial times, helping it establish a solid financial foundation.
  • The 2013 Theoretical Innovation Prize: Kurt Gray, Liane Young, and Adam Waytz for their 2012 Psychological Inquiry article entitled “Mind Perception is the Essence of Morality.” The paper proposes a simplification in the way psychologists view moral judgment.

A ceremony at the 2014 annual SPSP conference in Austin, TX (Feb. 13-15, 2014) will honor all of this year’s award recipients. Full citations are available online.

Image by Marc Sheff.

Posted in Awards, Situationist Contributors, Social Psychology | Leave a Comment »

The Good Feeling of Fast Thinking

Posted by The Situationist Staff on October 6, 2013

Moose

Situationist Contributor Emily Pronin’s recent article, “When the mind races: Effects of thought speed on feeling and action” (Current Directions in Psychological Science, 22, 283–288), was highlighted in a recent APS Observer column. Here is an excerpt containing a helpful overview of Pronin’s fascinating research and findings.

You wake up. Your phone blinks. You touch the screen, slide your finger, and chills shiver down your spine. “See me tomorrow,” says the email your boss sent at midnight. Your thoughts accelerate. “What does she want? Why did she write so late? Am I in trouble? The company is in trouble. This down economy! I’m getting fired. Why me? Where will I work? I have skills. There are other companies. I have no skills. Where will I apply? Can we move? What will my parents think? How will the kids react to changing schools? I can do this. We can do this. No matter what.”

We think. It helps us. Errands, plans, and goals require thought. Synapses fire. Action potentials race down axons. Chemicals bathe our brains with neurotransmitters. Thoughts guide action, from ordering a coffee to avoiding predators. What we think matters. But according to Emily Pronin of Princeton University, how fast we think matters, too.

Making people think fast boosts their happiness, energy, riskiness, and self-confidence. In an impressive program of research, Pronin and colleagues have documented these effects using many ways to speed up thinking. In one study, participants read trivia statements at fast or slow speeds (Chandler & Pronin, 2012). Next, they completed a risk-taking task. Participants could earn money — but only if they didn’t take too many risks. Fast-thinking participants took the most risks and earned the least money. On the bright side, having people read at twice their normal reading speed increased their positive emotion (Pronin & Wegner, 2006).

Pronin (2013) argues that fast thinking prepares people to take immediate action. Feeling good nudges that process along, as does increased energy. If you spy a moose while running on a trail, it will behoove you to take swift and confident action even if it involves some risk. You may even experience an “a-ha” moment that provides a creative solution you would not have considered if you were thinking at a normal or slow pace (Yang & Pronin, 2012).

Read the entire column here.

Image from Flickr.

Other Situationist posts about Emily Pronin’s work:

Posted in Emotions, Positive Psychology, Situationist Contributors | Leave a Comment »

Wegstock #4 – Tim Wilson

Posted by The Situationist Staff on July 29, 2013

In 2011, a conference honoring Dan Wegner, “Wegstock,” was held at Harvard University.

Speakers included Dan Gilbert, Susan Fiske, Tim Wilson, Jon Haidt, Henk Aarts, Nick Epley, Bill Swann, Todd Heatherton, Thalia Wheatley, Ap Dijksterhuis, Jon Krosnick, Jerry Clore, Bill Crano, Robin Vallacher, Jamie Pennebaker, Jonathan Schooler, and Dan Wegner.

The talks are brief and are well worth watching.  We will highlight the individual talks, roughly 15 minutes each, over the next month.

In this video, Situationist Contributor Timothy Wilson discusses the field, his research, and a bit about his book, Redirect.

To review a collection of Situationist posts discussing Dan Wegner’s research, click here. Click here for Situationist posts about Tim Wilson’s research.

Posted in Situationist Contributors, Social Psychology, Video | Leave a Comment »

Independence Day: Celebrating Courage to Challenge the Situation

Posted by The Situationist Staff on July 3, 2013

First Published on July 3, 2007:

Battle of Lexington

With the U.S. celebrating Independence Day — carnivals, fireworks, BBQs, parades, and other customs that have, at best, only a tangential connection to our “independence” — we thought it an opportune moment to return to its source in search of some situationism. No doubt, the Declaration of Independence is typically thought of as containing a dispositionist message (though few would express it in those terms) — all that language about individuals freely pursuing their own happiness. Great stuff, but arguably built on a dubious model of the human animal.

Declaration of Independence

That’s not the debate we want to provoke here. Instead, we are interested in simply highlighting some less familiar language in that same document that reveals something special about the mindset and celebrated courage of those behind the colonists’ revolt. Specifically, as Thomas Jefferson penned, “all experience hath shewn that mankind are more disposed to suffer, while evils are sufferable than to right themselves by abolishing the forms to which they are accustomed.”

Part of what made the July 4th heroes heroic, in our view, was their willingness to break from that disposition to suffer evils. They reacted, mobilized, strategized, resisted, and fought because they recognized that their suffering was not legitimate — a conclusion that many in the U.S. and abroad vehemently rejected.

Situationist contributor John Jost has researched and written extensively about a related topic — the widespread tendency to justify existing systems of power despite any unfair suffering that they may entail. As he and his co-authors recently summarized:

Whether because of discrimination on the basis of race, ethnicity, religion, social class, gender, or sexual orientation or because of policies and programs that privilege some at the expense of others, or even because of historical accidents, genetic disparities, or the fickleness of fate, certain social systems serve the interests of some stakeholders better than others. Yet historical and social scientific evidence shows that most of the time the majority of people – regardless of their own social class or position – accept and even defend the legitimacy of their social and economic systems and manage to maintain a “belief in a just world.”

If we truly want to emulate and celebrate the “founding fathers” of this republic, perhaps we should begin by taking seriously the possibility that what “is” is not always what “ought to be.”

Happy Fourth!

* * *

To read a couple of related Situationist posts, see “Thanksgiving as ‘System Justification’?” and “Patriots Lose: Justice Restored!”

Posted in History, Ideology, Situationist Contributors, Social Psychology | Tagged: , , | Leave a Comment »

The Situation of Questions about NFL Players’ Sexual Orientation

Posted by The Situationist Staff on June 17, 2013

SI Loaded Question 3

Last week the National Football League Players’ Association announced it would sell t-shirts with a gay pride theme. A number of players have agreed to have their names on the t-shirts. This is a positive step for the NFL, which, as Situationist contributor Michael McCann wrote earlier this year for Sports Illustrated, has seen fallout from its teams asking prospective players about their sexual orientation. Here is an excerpt of McCann’s article “Loaded Question,” which appeared on page 16 of the March 25, 2013, issue of SI.

* * *

In a March 14 letter to NFL commissioner Roger Goodell, New York attorney general Eric Schneiderman inquired why, during last month’s scouting combine, several college players were allegedly asked about their sexual orientation. Notre Dame linebacker Manti Te’o denied reports that he had faced such queries, but Colorado tight end Nick Kasa said a team wanted to know if he “likes girls.” Kasa’s isn’t the first case of offensive predraft questioning. In 2010, Dolphins G.M. Jeff Ireland asked Dez Bryant if his mother was a prostitute. (Ireland later apologized.)

The NFL asserts that such questions violate existing league policies and are subject to discipline. A league spokesperson also says that the questioning of prospects was to be discussed at this week’s owners meeting.

Are the NFL and the players association doing enough to protect prospects from biased questions? Article 49 of the current CBA declares, “There will be no discrimination in any form against any player … because of … sexual orientation.” But is a draft prospect who is not yet a member of the NFLPA or of an NFL team—and may never become one—fully protected by Article 49?

* * *

To read the rest, click here.  For other Situationist posts on homophobia, click here.

Posted in Law, Life, Situationist Contributors, Situationist Sports | Leave a Comment »

Not Your Grandparents’ Prejudice

Posted by The Situationist Staff on April 26, 2013

Blind Spot Book Cover

From NPR’s Code Switch (by Shankar Vedantam), a story about Situationist Contributor Mahzarin Banaji and Situationist friend Tony Greenwald:

Harvard psychologist Mahzarin Banaji was once approached by a reporter for an interview. When Banaji heard the name of the magazine the reporter was writing for, she declined the interview: She didn’t think much of the magazine and believed it portrayed research in psychology inaccurately.

But then the reporter said something that made her reconsider, Banaji recalled: “She said, ‘You know, I used to be a student at Yale when you were there, and even though I didn’t take a course with you, I do remember hearing about your work.’ “

The next words out of Banaji’s mouth: “OK, come on over; I’ll talk to you.”

After she changed her mind, Banaji got to thinking. Why had she changed her mind? She still didn’t think much of the magazine in which the article would appear. The answer: The reporter had found a way to make a personal connection.

For most people, this would have been so obvious and self-explanatory it would have required no further thought. Of course, we might think. Of course we’d help someone with whom we have a personal connection.

For Banaji, however, it was the start of a psychological exploration into the nature and consequences of favoritism — why we give some people the kind of extra-special treatment we don’t give others.

In a new book, Blindspot: Hidden Biases of Good People, Banaji and her co-author, Anthony Greenwald, a social psychologist at the University of Washington, turn the conventional way people think about prejudice on its head. Traditionally, Banaji says, psychologists in her field have looked for overt “acts of commission — what do I do? Do I go across town to burn down the church of somebody who’s not from my denomination? That, I can recognize as prejudice.”

Yet, far from springing from animosity and hatred, Banaji and Greenwald argue, prejudice may often stem from unintentional biases.

Take Banaji’s own behavior toward the reporter with a Yale connection. She would not have changed her mind for another reporter without the personal connection. In that sense, her decision was a form of prejudice, even though it didn’t feel that way.

Now, most people might argue such favoritism is harmless, but Banaji and Greenwald think it might actually explain a lot about the modern United States, where vanishingly few people say they hold explicit prejudice toward others but wide disparities remain along racial, class, and gender lines.

The two psychologists have revolutionized the scientific study of prejudice in recent decades, and their Implicit Association Test — which measures the speed of people’s hidden associations — has been applied in law and other fields. Few would doubt its impact. (I’ve written about Banaji and Greenwald’s work before, including in my 2010 book, The Hidden Brain.)

“I think that kind of act of helping towards people with whom we have some shared group identity is really the modern way in which discrimination likely happens,” Banaji says.

In many ways, the psychologists’ work mirrors the conclusion of another recent book, in which sociologist Nancy DiTomaso asks how it is that few people report feeling racial prejudice, while the United States still has enormous disparities. Discrimination today is less about treating people from other groups badly, DiTomaso writes, and more about giving preferential treatment to people who are part of our “in-groups.”

The insidious thing about favoritism is that it doesn’t feel icky in any way, Banaji says. We feel like a great friend when we give a buddy a foot in the door to a job interview at our workplace. We feel like good parents when we arrange a class trip for our daughter’s class to our place of work. We feel like generous people when we give our neighbors extra tickets to a sports game or a show.

In each case, however, Banaji, Greenwald and DiTomaso might argue, we strengthen existing patterns of advantage and disadvantage because our friends, neighbors and children’s classmates are overwhelmingly likely to share our own racial, religious and socioeconomic backgrounds. When we help someone from one of these in-groups, we don’t stop to ask: Whom are we not helping?

Banaji tells a story in the book about a friend, now a professor at Northeastern University. . . .

Read or listen to the rest of the story here.

Related Situationist posts:

Go to Project Implicit here.  Take the Policy IAT here.

To review all of the previous Situationist posts discussing implicit associations click on the “Implicit Associations” category in the right margin, or, for a list of such posts, click here.

Learn more about the book, Blind Spot, here.

Posted in Book, Implicit Associations, Life, Marketing, Situationist Contributors, Social Psychology | Leave a Comment »

The Interior Situation of the Climate Change Skeptic

Posted by The Situationist Staff on April 23, 2013

global warming image from davidllorito.blogspot.com/search/label/governance

From the APS Observer, an article by Situationist Contributor John T. Jost and Erin P. Hennes:

A multitude of environmental scientists, among others, worry that future generations will look back at the present era as one in which the human race could have — and should have —taken decisive action to prevent (or at least mitigate) the most menacing costs associated with global climate change. According to public opinion surveys, however, only 38 percent of Americans believe that global warming will seriously affect them or their way of life (Newport, 2012), and 42 percent continue to believe that global warming claims are “generally exaggerated” (Saad, 2012). When it comes to beliefs about climate change, men are more skeptical than women, and political conservatives are more skeptical than liberals. In a Gallup survey conducted in 2010, 42 percent of men and only 30 percent of conservatives agreed that “effects of global warming are already occurring,” as compared to 56 percent of women and 74 percent of liberals (Jones, 2010; see also McCright & Dunlap, 2011).

In a recent book, the philosopher Stephen Gardiner (2011) argues that environmental inaction is the consequence of a “perfect moral storm.” Specifically, he points to the conjunction of three unfortunate causes: 1) a tendency for the richer nations of the world to foist the burden of environmental risks upon poorer nations; 2) the present generation’s temptation to defer the costs of the crisis to future generations; and 3) pervasive ignorance concerning science, ethics, international justice, and the interdependence of life. Gardiner writes that the last factor “not only complicates the task of behaving well, but also renders us more vulnerable to the first two storms” (p. 7). Gardiner provides an astute analysis of the problem of environmental inaction, but he overlooks the possibility that climate change denial may not merely result from ignorance. Rather, many members of the public may possess a relatively strong motivation to deny and minimize environmental realities. Specifically, our research team has found that the social psychological motivation to defend, bolster, and justify aspects of the status quo — what we refer to as system justification (see, e.g., Jost, Banaji, & Nosek, 2004) — contaminates the public’s understanding of anthropogenic climate change.

In research published in 2010, we discovered that individuals who score higher on Kay and Jost’s (2003) General System Justification scale (which measures responses to statements such as “Most policies serve the greater good,” and “In general, the American political system operates as it should”) exhibit greater denial of environmental problems and vulnerabilities. Furthermore, system justification statistically mediates the effects of gender and political ideology on support for the environment. That is, men and conservatives are more likely than women and liberals to believe that American society is fair and legitimate, and these differences in system justification explain, at least in part, why they are so skeptical about climate change and are reluctant to take pro-environmental action (Feygina, Jost, & Goldsmith, 2010; see also Feinberg & Willer, 2011).
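
For readers less familiar with mediation analysis, the sketch below illustrates the general logic in Python: the indirect path from ideology through system justification is estimated alongside the direct path. The variable names and simulated data are hypothetical, and this is only a schematic illustration under those assumptions, not the authors’ actual analysis, which used different measures, samples, and software.

```python
# Minimal sketch of a simple mediation analysis (simulated data and
# hypothetical variable names; not the published analysis).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
# Simulate data in which skepticism is driven by ideology partly
# *through* system justification (the mediator).
ideology = rng.normal(size=n)                    # higher = more conservative
sys_just = 0.5 * ideology + rng.normal(size=n)   # system justification score
skepticism = 0.4 * sys_just + 0.1 * ideology + rng.normal(size=n)
df = pd.DataFrame({"ideology": ideology, "sys_just": sys_just,
                   "skepticism": skepticism})

total = smf.ols("skepticism ~ ideology", data=df).fit()            # path c
a_path = smf.ols("sys_just ~ ideology", data=df).fit()             # path a
full = smf.ols("skepticism ~ ideology + sys_just", data=df).fit()  # paths b, c'

indirect = a_path.params["ideology"] * full.params["sys_just"]
print("total effect (c):     ", round(total.params["ideology"], 3))
print("direct effect (c'):   ", round(full.params["ideology"], 3))
print("indirect effect (a*b):", round(indirect, 3))
```

Mediation is suggested when the indirect effect is reliably nonzero and the direct effect shrinks relative to the total effect; in practice the indirect effect would be tested with bootstrapped confidence intervals rather than read off point estimates.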

More recently, we have conducted a series of studies corroborating the hypothesis that system justification motivates skepticism about climate change. Specifically, we have found that the denial of environmental problems is facilitated by information-processing distortions associated with system justification that affect evaluation, recall, and even tactile perception (Hennes, Feygina, & Jost, 2011). In one study, we found that individuals who scored higher (vs. lower) on Jost and Thompson’s (2000) Economic System Justification scale (which measures responses to such statements as “If people work hard, they almost always get what they want,” and “It is unfair to have an economic system which produces extreme wealth and extreme poverty at the same time,” reverse-scored) found messages disparaging the case for global warming to be more persuasive, evaluated the evidence for global warming to be weaker, and expressed less willingness to take action to curb global warming.
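
A note on the “reverse-scored” item mentioned above: on such scales, agreement with some items indicates less, rather than more, system justification, so those responses are flipped before averaging. The snippet below is a generic illustration with made-up item labels and an assumed 1-to-9 response format; it is not the published scoring protocol.

```python
# Generic illustration of reverse-scoring (hypothetical items and an
# assumed 1-9 response format; not the actual scale's scoring rules).
def score_scale(responses, reverse_items, scale_min=1, scale_max=9):
    """Average the responses after flipping reverse-scored items."""
    scored = []
    for item, value in responses.items():
        if item in reverse_items:
            value = scale_min + scale_max - value  # e.g., 8 becomes 2
        scored.append(value)
    return sum(scored) / len(scored)

# "q2" stands in for an item where agreement signals *less* system
# justification, such as the "extreme wealth and extreme poverty" statement.
responses = {"q1": 7, "q2": 8, "q3": 5}
print(score_scale(responses, reverse_items={"q2"}))  # (7 + 2 + 5) / 3 ≈ 4.67
```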

In a second study, we extended these findings by demonstrating that motivated processing biases recall of information about climate change. Specifically, we exposed research participants to clips from a televised newscast and later asked them to recall details from the program and to evaluate scientific evidence concerning climate change. Once again, we found that high system-justifiers evaluated the quality of the evidence to be weaker, were less likely to believe that climate change is occurring, and viewed it as a less important policy issue, in comparison with low system-justifiers. High system-justifiers also recalled the information to which they had been exposed as less serious (i.e., remembering smaller increases in global temperatures, lower sea levels, and less reliable historical data concerning climate change) than did low system-justifiers. Poorer recall was associated with skepticism about climate change. Thus, individuals who misremembered the evidence provided in the video to be less severe were less likely to support efforts to address climate change.

In an experimental investigation, we demonstrated that temporarily activating system-justification motivation produced memory biases and exacerbated skepticism about global climate change. More specifically, we adapted a system-dependence manipulation developed by Kay, Gaucher, Peach et al. (2009; see also Shepherd & Kay, 2012) and found that when people were led to believe that the political system exerted a strong (vs. weak) impact on their life circumstances, they were more likely to misremember details from a newspaper article they read earlier in the session. Importantly, all of the memory errors were in a system-exonerating direction: The proportion of man-made carbon emissions was recalled as being less than actually reported, and the scientists who reported errors in the much-maligned 2007 report by the Intergovernmental Panel on Climate Change were misidentified as skeptics rather than believers in anthropogenic climate change (Hennes et al., 2011).

We have discovered that system-justification motivation can even affect perceptions of ambient temperature. Our research assistants approached pedestrians in New York’s Washington Square Park during the summer months and asked them a series of questions, including their estimates of the temperature outside. Individuals who scored high on system justification or who were assigned to a high system-dependence condition reported that the current temperature was significantly lower than did individuals who scored low on system justification or who were assigned to a low system-dependence condition. These findings suggest that people may be motivated to feel (or not feel) the evidence of global warming when system-justification needs are either chronically or temporarily heightened.

Berkeley physicist Richard Muller, a former skeptic of anthropogenic climate change, made headlines last summer when he declared that not only is climate change real, but that “humans are almost entirely the cause” (Muller, 2012). If catastrophic events like Hurricane Sandy become more common, they may shift hearts and minds, albeit slowly. Given economic and other crises facing the nation (many of which probably exacerbate system-justification motivation), it still remains to be seen whether Americans and their elected officials will follow suit in embracing the scientific consensus. Climate change was a non-issue during the 2012 election campaign, and President Obama (2013) was criticized resoundingly by Senator Marco Rubio and other conservatives for emphasizing the issue in his most recent State of the Union speech. Suffice it to say that neither politicians nor the voters who back them appreciate the suggestion that the opinions they hold are motivated, even in part, by social and psychological factors that are probably outside of their awareness. American society and many others have yet to find a way of allowing the facts — scientific and otherwise — to trump special interests, political posturing, and motivated reasoning when it comes to the development of public policy. But that doesn’t mean we should stop trying.

References and Further Reading:

Carroll, J. (2007). Public: Iraq war still top priority for President and Congress. Gallup Poll. Retrieved April 9, 2007, from http://www.galluppoll.com/content/?ci=27103&pg=1

Feinberg, M., & Willer, R. (2011). Apocalypse soon? Dire messages reduce belief in global warming by contradicting just world beliefs. Psychological Science, 22, 34–38.

Feygina, I., Jost, J. T., & Goldsmith, R. (2010). System justification, the denial of global warming, and the possibility of “system-sanctioned change.” Personality and Social Psychology Bulletin, 36, 326–338.

Hennes, E. P., Feygina, I., & Jost, J. T. (2011). Motivated evaluation, recall, and tactile perception in the service of the system: The case of anthropogenic climate change. Paper presented at the Princeton University Conference on Psychology and Policymaking, Princeton, NJ.

Jones, J. M. (2010). Conservatives’ doubts about global warming grow. Gallup Poll. Retrieved August 14, 2012, from http://www.gallup.com/poll/126563/conservatives-doubts-global-warming-grow.aspx

Jost, J. T., Banaji, M. R., & Nosek, B. A. (2004). A decade of system justification theory: Accumulated evidence of conscious and unconscious bolstering of the status quo. Political Psychology, 25, 881–919.

Jost, J. T., & Thompson, E. P. (2000). Group-based dominance and opposition to equality as independent predictors of self-esteem, ethnocentrism, and social policy attitudes among African Americans and European Americans. Journal of Experimental Social Psychology, 36, 209–232.

Kay, A. C., & Jost, J. T. (2003). Complementary justice: Effects of “poor but happy” and “poor but honest” stereotype exemplars on system justification and implicit activation of the justice motive. Journal of Personality and Social Psychology, 85, 823–837.

McCright, A. M., & Dunlap, R. E. (2011). Cool dudes: The denial of climate change among conservative white males in the United States. Global Environmental Change, 21, 1163–1172.

Muller, R. A. (2012, July 30). The conversion of a climate-change skeptic. New York Times, p. A19.

Newport, F. (2012). Americans’ worries about global warming up slightly. Gallup Poll. Retrieved January 28, 2013, from http://www.gallup.com/poll/153653/Americans-Worries-Global-Warming-Slightly.aspx

Obama, B. (2013). State of the union address. Retrieved March 6, 2013, from http://www.nytimes.com/2013/02/13/us/politics/obamas-2013-state-of-the-union-address.html?pagewanted=1&_r=2

Saad, L. (2012). In U.S., global warming views steady despite warm winter. Gallup Poll. Retrieved January 28, 2013, from http://www.gallup.com/poll/153608/Global-Warming-Views-Steady-Despite-Warm-Winter.aspx

Shepherd, S., & Kay, A. C. (2012). On the perpetuation of ignorance: System dependence, system justification, and the motivated avoidance of sociopolitical information. Journal of Personality and Social Psychology, 102, 264–280.

Related Situationist posts:

Posted in Environment, Ideology, Politics, Public Policy, Situationist Contributors, Social Psychology, System Legitimacy | 2 Comments »

Frontier Tort – Selling Beer in Whiteclay

Posted by The Situationist Staff on April 15, 2013

Alcoholism Cover Small

At Harvard Law School in the fall of 2012, the 80 students in Professor Hanson’s situationist-oriented first-year torts class participated in an experimental group project. The project required students to research, discuss, and write a white paper about a current policy problem for which tort law (or some form of civil liability) might provide a partial solution. Their projects, presentations, and white papers were informed significantly by the mind sciences. You can read more about those projects, view the presentations, and download the white papers at the Frontier Torts website.

One of the group projects involved the sale of alcohol to members of the Oglala Sioux in Whiteclay, Nebraska, just outside the Pine Ridge Indian Reservation. Here’s the Executive Summary of the white paper.

Native American Alcoholism: A Frontier Tort

Executive Summary


Since its introduction into Native American communities by European colonists, alcohol has plagued the members of many tribes to a disastrous extent. The Oglala Sioux of Pine Ridge have especially suffered from alcoholism, enabled and encouraged by liquor stores just outside the reservation’s borders. Despite the complexities of this situation, media outlets have often reduced it to a pitiable image of dirty, poor Native Americans, degraded by the white man’s vice.

Upon further analysis, however, it becomes evident that a variety of factors influence the situation of Native American alcoholism. While neurobiological, psychological, and genetic factors are often thought to offer plausible internal situational explanations as to why Native Americans suffer so much more acutely from this disease than the rest of the nation, high levels of poverty in Native American communities, a traumatic and violent history, and informational issues compound as external situational factors that exacerbate the problem.

Unfortunately, the three major stakeholders in this situation (the alcohol industry, the State of Nebraska, and the Native Americans) have conflicting interests, tactics, and attribution modes that clash significantly in ways that have prevented any meaningful resolution from being reached. However, there are a variety of federal, state, and tribal programs and initiatives that could potentially resolve this issue in a practical way, so long as all key players agree to participate in a meaningful, collaborative effort.

The key to implementation of these policy actions is determining who should bear the costs they require: society as a whole through the traditional federal taxes, the alcohol companies through tort litigation, or the individuals who purchase the alcohol through an alcohol sales tax. Ultimately, an economic analysis leads to the conclusion that liability should be placed upon the alcohol companies and tort litigation damages should fund the suggested policy initiatives.

You can watch the related presentations and download the white paper here.

Related Situationist posts:

Posted in Deep Capture, Food and Drug Law, History, Marketing, Morality, Neuroscience, Politics, Situationist Contributors | Leave a Comment »

Blind Spot

Posted by The Situationist Staff on March 3, 2013

Blind Spot Book Cover

From the Harvard Gazette (an article about Situationist Contributor Mahzarin Banaji’s extraordinary new book, co-authored with Anthony Greenwald):

Mahzarin Banaji shouldn’t have been biased against women. A leading social psychologist — who rose from unlikely circumstances in her native India, where she once dreamed of becoming a secretary — she knew better than most that women were just as cut out for the working world as men.

Then Banaji sat down to take a test. Names of men and women and words associated with “career” and “family” flashed across the computer screen, one after the other. As she tried to sort the words into groups as instructed, she found that she was much faster and more accurate when asked to lump the male names with job-oriented words. It wasn’t what a pathbreaking female scientist would have expected, or hoped, to see.

“I thought to myself: Something is wrong with this damned test,” said Banaji, Harvard’s Richard Clarke Cabot Professor of Social Ethics, as she reflected during an interview in her William James Hall office on her first run-in with an Implicit Association Test (I.A.T.).

That Banaji specializes in creating just these kinds of assessments did nothing to change the results. But at least she can take comfort in knowing she’s not alone. In the past 15 years, more than 14 million such tests have been taken at Project Implicit, the website of Banaji and her longtime collaborator Anthony Greenwald.

What these curious test-takers, as well as Banaji and Greenwald, found was that many of us hold onto quite a bit of unconscious bias against all sorts of groups, no matter how unprejudiced we strive to be in our actions and conscious thoughts. It’s a counterintuitive, even unnerving proposition, and one that Banaji and Greenwald, a professor of psychology at the University of Washington, set out to explain for a lay audience in “Blindspot: Hidden Biases of Good People.”

“This test can get under your skin in some ways,” Banaji said. “There is something annoying about us coming around and telling these good people that something may be less good here.”

But if most of us want to be good — to match our actions to our best intentions — our brains sometimes have other ideas. Just as the human eye has a blind spot in its field of vision, they write, our unconscious minds can contain hidden biases (often toward groups of which we are not a member or with which we are less familiar) that can guide our behavior. Banaji and Greenwald have devoted three decades to developing scientifically sound ways to uncover those biases.

“I’m not a good theoretician. I don’t have great ideas about how minds work or how people behave,” Banaji said, laughing. “But maybe as a result, I’ve focused a lot more on the development and understanding of a method that, if wielded appropriately, would produce evidence that would have to change our minds.”

In “Blindspot,” she and Greenwald offer people tools to overcome their hardwired biases, and to stoke conversation about the deeply ingrained, very human tendency toward bias in a country that prides itself on egalitarian values.

Banaji met Greenwald in the early 1980s at Ohio State University, where Banaji had enrolled in a Ph.D. program almost on a whim, after picking up a cheap copy of the “Handbook of Social Psychology” in India. The subject seemed to meld science and philosophy, she said.

Until then, “I had no clue that it really was possible to conduct an honest-to-goodness experiment on human nature.”

Greenwald became her graduate adviser and, after she accepted a position at Yale, her collaborator. For years, the two worked together on a number of papers, largely by email. (“Neither of us likes talking on the phone,” Banaji said.)

They developed the I.A.T. in the 1990s at Yale with the help of Brian Nosek, then a graduate student of Banaji and now a professor at the University of Virginia. The simple tests can be taken in roughly 10 minutes and can be modified to assess unconscious bias in different categories, for example, whether white test-takers are likelier to associate “good” words with white faces more quickly than with black faces. (They are, and black test-takers show the reverse results.)
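
To give a rough sense of how response speed becomes a bias score, here is a toy sketch in Python: average latencies from the two pairing conditions are compared and the difference is standardized. The numbers are invented, and the published IAT scoring algorithm involves additional steps (error penalties, trial trimming, block-specific standard deviations) that are omitted here.

```python
# Toy illustration of turning IAT-style response latencies (ms) into a
# standardized difference score. Invented data; the real scoring
# algorithm is more elaborate.
import statistics

def toy_iat_score(compatible_rts, incompatible_rts):
    """Mean latency difference divided by the pooled standard deviation."""
    pooled_sd = statistics.stdev(compatible_rts + incompatible_rts)
    return (statistics.mean(incompatible_rts)
            - statistics.mean(compatible_rts)) / pooled_sd

compatible = [610, 655, 590, 700, 640]     # e.g., white faces paired with "good"
incompatible = [760, 820, 705, 790, 840]   # the reversed pairing
print(round(toy_iat_score(compatible, incompatible), 2))
```

A larger positive score simply means the respondent was faster, on average, when the stereotype-consistent categories shared a response key.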

At the time, few psychology studies were conducted online. When the team members launched Project Implicit in 1998, Banaji hoped to garner 500 responses in the first year. With no advertising, they hit 45,000 in the first month. A flood of media attention followed, as did professional controversy.

Many critics were upset by the social implications of learning that humans may be unconscious unegalitarians, Banaji said. “But it’s been great for us to have the criticism. It has led to the work moving much faster. The standards the I.A.T. has been held to have been higher than anything I have seen.”

Banaji is quick to point out that an I.A.T. isn’t meant to shame people. If a patient went to a doctor and took a blood pressure test (which, she adds, is about as reliable as the I.A.T.), and was told he had hypertension, he wouldn’t beat himself up for not having detected it himself. Rather, he’d ask what steps he could take to improve the situation.

“If somebody asked me what my kidneys are doing right now, I would have no idea,” she said. “Yet, we really do believe that we pretty much know what goes on in our heads. And that’s because we do have access to a piece of it called the conscious mind, and that wrongly gives us the feeling that we know all of it.”

Overcoming our biases, even the unconscious ones we can’t tell are influencing our actions, isn’t about striving for political correctness. In a globalized world, the tools of our primitive brains — the tendency to associate “the Other” with a threat, for instance — can often hold us back.

“When our ancestors met someone who was different from them, their first thought was probably: Are they going to kill me before I can kill them?” Banaji said. “Today, when we see someone who’s totally different from us, we have to ask: Can we outsource to them? Can we collaborate with them? Can we forge a relationship with them and beat somebody who’s genetically just like us? That’s a tall order!”

Though the idea of implicit bias has captured the public’s attention for more than a decade, Greenwald and Banaji did not conceive of a book on the topic until 2004, when both spent a year as fellows at the Radcliffe Institute for Advanced Study, where Banaji had taken a faculty appointment in 2002. Free from their normal academic obligations and once again in the same town, they began to work on “Blindspot.”

The ideas in “Blindspot” will hardly incite debate among psychologists at this point, Banaji said. Rather, she and Greenwald wrote the book in response to the many requests they received to speak to groups of physicians, business executives, lawyers, and other private-sector professionals who saw how ignoring their unconscious biases — in hiring the best candidates, treating patients of all ages and races, selecting witnesses and jury members — could hurt their bottom line.

Twenty years ago, when Banaji asked her intro-psych students whether they held any biases, 95 percent would say no. Now that number is about one-fifth, she said.

“This recognition that we have failings is, I think, a much more accepted idea — which is why I think the book is not going to be controversial,” Banaji said.

Of course, she added, that’s what happens to many once-incendiary ideas. “They’re criticized; people say they can’t be true. And then over time it becomes common sense. While we’re not quite at the common sense stage, I do think we’re getting there.”

To visit the Project Implicit website and find out more about implicit associations, click here. To review many previous Situationist posts discussing implicit associations click on the “Implicit Associations” category in the right margin or, for a list of such posts, click here.

Posted in Book, Ideology, Implicit Associations, Situationist Contributors | Leave a Comment »