The Situationist

Archive for the ‘Morality’ Category

Morality and Politics: A System Justification Perspective

Posted by The Situationist Staff on March 5, 2015

An Interview with John Jost by Paul Rosenberg

Note: This interview was originally published on Salon.com with an outrageously incendiary title that entirely misrepresented its content.

Introduction by Paul Rosenberg:

In the immediate aftermath of World War II, a wide range of thinkers, both secular and religious, struggled to make sense of the profound evil of war, particularly Nazi Germany and the Holocaust. One such effort, “The Authoritarian Personality” by Theodor Adorno and three co-authors, opened up a whole new field of political psychology—initially a small niche within the broader field of social psychology—which developed fitfully over the years but became an increasingly robust subject area in the 1980s and 90s, fleshing out a number of distinct areas of cognitive processing in which liberals and conservatives differed from one another. Liberal/conservative differences were not the sole concern of this field, but they did appear repeatedly across a growing range of measures, including the inclination to justify the existing social order, whatever it might be, an insight developed by John Jost, starting in the 1990s, under the rubric of “system justification theory.”

The field of political psychology gained increased visibility in the 2000s as conservative Republicans controlled the White House and Congress simultaneously for the first time since the Great Depression, and took the nation in an increasingly divisive direction. Most notably, John Dean’s 2006 bestseller, “Conservatives Without Conscience,” popularized two of the more striking developments of the 1980s and 90s, the constructs of right-wing authoritarianism and social dominance orientation. A few years before that, a purely academic paper, “Political Conservatism as Motivated Social Cognition,” by Jost and three other prominent researchers in the field, caused a brief spasm of political reaction that led some in Congress to talk of defunding the entire field.

But as the Bush era ended and Barack Obama’s rhetoric of transcending right/left differences captured the national imagination, an echo of that sentiment appeared in the field of political psychology as well. Known as “moral foundations theory,” most closely associated with psychologist Jonathan Haidt and popularized in his book “The Righteous Mind,” it argued that a too-narrow focus on concerns of fairness and care/harm avoidance had diminished researchers’ appreciation for the full range of moral concerns, especially a particular subset of distinct concerns which conservatives appear to value more than liberals do. In order to restore balance to the field, researchers must broaden their horizons—and even, Haidt argued, engage in affirmative action to recruit conservatives into the field of political psychology. This was, in effect, an argument invoking liberal values—fairness, inclusion, openness to new ideas, etc.—and using them to criticize or even attack what was characterized as a liberal orthodoxy, or even a church-like, closed-minded tribal moral community.

Yet, to some, these arguments seemed to gloss over, or even outright dismiss, a wide body of data, not dogma, from decades of previous research. While people were willing to consider new information and new perspectives, there was a reluctance to throw out the baby with the bathwater, as it were. In the most nitty-gritty sense, the question came down to this: Was the rhetorical framing of the moral foundations argument actually congruent with the detailed empirical findings in the field? Or did it serve more to blur important distinctions that were solidly grounded in rigorous observation?

Recently, a number of studies have raised questions about moral foundations theory in precisely these terms—are the moral foundations more congenial to conservatives actually reflective of non-moral or even immoral tendencies which have already been extensively studied? Late last year, a paper co-authored by Jost—“Another Look At Moral Foundations Theory”—built on these earlier studies to make the strongest case yet along these lines. To gain a better understanding of the field as a whole, moral foundations theory as a challenge within it, the problems that theory is now confronting, and what sort of resolution—and new frontiers—may lie ahead for the field, Paul Rosenberg spoke with John Jost. In the end, he suggested, moral foundations theory and system justification theory may end up looking surprisingly similar to one another, rather than being radically at odds.

PR: You’re best known for your work developing system justification theory, followed by your broader work on developing an integrated account of political ideology. You recently co-authored a paper, “Another Look at Moral Foundations Theory,” which I want to focus on, but in order to do so coherently, I thought it best to begin by first asking you about your own work, and that of others you’ve helped integrate, before turning to moral foundations theory generally, and this critical paper in particular.

So, with that in mind as a game plan, could you briefly explain what system justification theory is all about, how it was that you became interested in the subject matter, and why others should be interested in it as well?

JJ: When I was a graduate student in social psychology at Yale back in the 1990s, I began to wonder about a set of seemingly unrelated phenomena that were all counterintuitive in some way and in need of explanation. So I asked: Why do people stay in abusive relationships, why do women feel that they are entitled to lower salaries than men, and why do African American children come to think that white dolls are more attractive and desirable? Why do people blame victims of injustice, and why do victims of injustice sometimes blame themselves? Why is it so difficult for unions and other organizations to get people to stand up for themselves, and why do we find personal and social change to be so difficult, even painful? Of course, not everyone exhibits these patterns of behavior at all times, but many people do, and it seemed to me that these phenomena were not well explained by existing theories in social science.

And so it occurred to me that there might be a common denominator—at the level of social psychology—in these seemingly disparate situations. Perhaps human beings are in some fairly subtle way prone to accept, defend, justify, and rationalize existing social arrangements and to resist attempts to change the status quo, however well-meaning those attempts may be. In other words, we may be motivated, to varying degrees, to justify the social systems on which we depend, to see them as relatively good, fair, legitimate, desirable, and so on.

This did not strike me as implausible, given that social psychologists had already demonstrated that we are often motivated to defend and justify ourselves and the social groups to which we belong. Most of us believe that we are better drivers than the average person and more fair, too, and many of us believe that our schools or sports teams or companies are better than their rivals and competitors. Why should we not also want to believe that the social, economic, and political institutions that are familiar to us are, all things considered, better than the alternatives? To believe otherwise is at least somewhat painful, insofar as it would force us to confront the possibility that our lives and those of others around us may be subject to capriciousness, exploitation, discrimination, and injustice, and that things could be different, better—but they are not.

In 2003, a paper you co-authored, “Political Conservatism as Motivated Social Cognition,” caused quite a stir politically—there were even brief rumblings in Congress about cutting off all research funding, not just for you, but for an entire broad field of research, though you managed to quell those rumblings in a subsequent Washington Post op-ed. That paper might well be called the tip of the iceberg of a whole body of work you’ve helped draw together, and have continued to work on since then. So, first of all, what was that paper about?

We wanted to understand the relationship, if any, between psychological conservatism—the mental forces that contribute to resistance to change—and political conservatism as an ideology or a social movement. My colleagues and I conducted a quantitative, meta-analytic review of nearly fifty years of research conducted in 12 different countries and involving over 22,000 research participants or individual cases. We found 88 studies that had investigated correlations between personality characteristics and various psychological needs, motives, and tendencies, on one hand, and political attitudes and opinions, on the other.

And what did it show?

We found pretty clear and consistent correlations between psychological motives to reduce and manage uncertainty and threat—as measured with standard psychometric scales used to gauge personal needs for order, structure, and closure, intolerance of ambiguity, cognitive simplicity vs. complexity, death anxiety, perceptions of a dangerous world, etc.—and identification with and endorsement of politically conservative (vs. liberal) opinions, leaders, parties, and policies.
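For readers curious about the mechanics of a “quantitative, meta-analytic review,” here is a minimal sketch in Python. The per-study numbers are invented for illustration (the actual review pooled 88 studies), and the method shown, combining correlations via Fisher’s r-to-z transform with inverse-variance weights, is the standard textbook approach rather than necessarily the exact procedure the authors used.

```python
import math

# Hypothetical per-study results: (sample size, correlation between a
# threat/uncertainty measure and conservatism). Invented for illustration;
# these are not values from the actual 88-study meta-analysis.
studies = [(120, 0.32), (85, 0.18), (240, 0.27), (60, 0.41)]

# Fisher r-to-z transform; each study is weighted by n - 3, the inverse
# of the sampling variance of its z value.
weighted_z = sum((n - 3) * math.atanh(r) for n, r in studies)
total_weight = sum(n - 3 for n, _ in studies)
pooled_z = weighted_z / total_weight

# Back-transform the pooled z to a correlation.
pooled_r = math.tanh(pooled_z)
print(f"Pooled correlation across {len(studies)} studies: r = {pooled_r:.2f}")
```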

How did politicians misunderstand the paper, and how did you respond?

I suspect that there were some honest misunderstandings as well as some other kinds. One issue is that many people seem to assume that whatever psychologists are studying must be considered (by the researchers, at least) as abnormal or pathological. But that is simply untrue. Social, cognitive, developmental, personality, and political psychologists are all far more likely to study attitudes and behaviors that are normal, ordinary, and mundane. We are primarily interested in understanding the dynamics of everyday life. In any case, none of the variables that my colleagues and I investigated had anything to do with psychopathology; we were looking at variability in normal ranges within the population and whether specific psychological characteristics were correlated with political opinions. We tried to point some of these things out, encouraging people to read beyond the title, and emphasizing that there are advantages as well as disadvantages to being high vs. low on the need for cognitive closure, cognitive complexity, sensitivity to threat, and so on.

How has that paper been built on since?

I am gratified and amazed at how many research teams all over the world have taken our ideas and refined, extended, and otherwise built upon them over the last decade. To begin with, a number of studies have confirmed that political conservatism and right-wing orientation are associated with various measures of system justification. And public opinion research involving nationally representative samples from all over the world establishes that the two core value dimensions that we proposed to separate the right from the left—traditionalism (or resistance to change) and acceptance of inequality—are indeed correlated with one another, and they are generally (but not always) associated with system justification, conservatism, and right-wing orientation.

Since 2003, numerous studies have replicated the correlations we observed between epistemic motives, including personal needs for order, structure, and closure and resistance to change, acceptance of inequality, system justification, conservatism, and right-wing orientation. Several find that liberals score higher than conservatives on the need for cognition, which captures the individual’s chronic tendency to enjoy effortful forms of thinking. This finding is potentially important because individuals who score lower on the need for cognition favor quick, intuitive, heuristic processing of new information, whereas those who score higher are more likely to engage in more elaborate, systematic processing (what Daniel Kahneman refers to as System 1 and System 2 thinking, respectively). The relationship between epistemic motivation and political orientation has also been explored in research on nonverbal behavior and neurocognitive structure and functioning.

Various labs have also replicated the correlations we observed between existential motives, including attention and sensitivity to dangerous and threatening stimuli, and resistance to change, acceptance of inequality, and conservatism. Ingenious experiments have demonstrated that temporary activation of epistemic needs to reduce uncertainty or to attain a sense of control or closure increases the appeal of system justification, conservatism, and right-wing orientation. Experiments have demonstrated that temporary activation of existential needs to manage threat and anxiety likewise increases the appeal of system justification, conservatism, and right-wing orientation, all other things being equal. These experiments are especially valuable because they identify causal relationships between psychological motives and political orientation.

Progress has also been made in understanding connections between personality characteristics and political orientation. In terms of “Big Five” personality traits, studies involving students and nationally representative samples of adults tell exactly the same story: Openness to new experiences is positively associated with a liberal orientation, whereas Conscientiousness (especially the need for order) is positively associated with conservative orientation. In a few longitudinal studies, childhood measures of intolerance of ambiguity, uncertainty, and complexity as well as sensitivity to fear, threat, and danger have been found to predict conservative orientation later in life. Finally, we have observed that throughout North America and Western Europe, conservatives report being happier and more satisfied than liberals, and this difference is partially (but not completely) explained by system justification and the acceptance of inequality as legitimate. As we suspected many years ago, there appears to be an emotional or hedonic cost to seeing the system as unjust and in need of significant change.

“Moral foundations theory” has gotten a lot of popular press, as well as serious attention in the research community, but for those not familiar with it, could you give us a brief description, and then say something about why it is problematic on its face (particularly in light of the research discussed above)?

The basic idea is that there are five or six innate (evolutionarily prepared) bases for human “moral” judgment and behavior, namely fairness (which moral foundations theorists understand largely in terms of reciprocity), avoidance of harm, ingroup loyalty, obedience to authority, and the enforcement of purity standards. My main problem is that sometimes moral foundations theorists write descriptively as if these are purely subjective considerations—that people think and act as if morality requires us to obey authority, be loyal to the group, and so on. I have no problem with that descriptive claim—although this is surely only a small subset of the things that people might think are morally relevant—as long as we acknowledge that people could be wrong when they think and act as if these are inherently moral considerations.

At other times, however, moral foundations theorists write prescriptively, as if these “foundations” should be given equal weight, objectively speaking, that all of them should be considered virtues, and that anyone who rejects any of them is ignoring an important part of what it means to be a moral human being. I and others have pointed out that many of the worst atrocities in human history have been committed not merely in the name of group loyalty, obedience to authority, and the enforcement of purity standards, but because of a faithful application of these principles. For 24 centuries, Western philosophers have concluded that treating people fairly and minimizing harm should, when it comes to morality, trump group loyalty, deference to authority, and purification. In many cases, behaving ethically requires impartiality and disobedience and the overcoming of gut-level reactions that may lead us toward nepotism, deference, and acting on the basis of disgust and other emotional intuitions. It may be difficult to overcome these things, but isn’t this what morality requires of us?

There have been a number of initial critical studies published, which you cite in this new paper. What have they shown?

Part of the problem is that moral foundations theorists framed their work, for rhetorical purposes, in strong contrast to other research in social and political psychology, including work that I’ve been associated with. But this was unnecessary from the start and, in retrospect, entirely misleading. They basically said: “Past work suggests that conservatism is motivated by psychological needs to reduce uncertainty and threat and that it is associated with authoritarianism and social dominance, but we say that it is motivated by genuinely moral—not immoral or amoral—concerns for group loyalty, obedience to authority, and purity.” This has turned out to be a false juxtaposition on many levels.

First, researchers in England and the Netherlands demonstrated that threat sensitivity is in fact associated with group loyalty, obedience to authority, and purity. For instance, perceptions of a dangerous world predict the endorsement of these three values, but not the endorsement of fairness or harm avoidance. Second, a few research teams in the U.S. and New Zealand discovered that authoritarianism and social dominance orientation were positively associated with the moral valuation of ingroup, authority, and purity, but not with the valuation of fairness and avoidance of harm. Psychologically speaking, the three so-called “binding foundations” look quite different from the two more humanistic ones.

What haven’t these earlier studies tackled that you wanted to address? And why was this important?

These other studies suggested that there was a reasonably close connection between authoritarianism and the endorsement of ingroup, authority, and purity concerns, but they did not investigate the possibility that individual differences in authoritarianism and social dominance orientation could explain, in a statistical sense, why conservatives value ingroup, authority, and purity significantly more than liberals do and—just as important, but often glossed over in the literature on moral foundations theory—why liberals value fairness and the avoidance of harm significantly more than conservatives do.

How did you go about tackling these unanswered questions? What did you find and how did it compare with what you might have expected?

There was a graduate student named Matthew Kugler (who was then studying at Princeton) who attended a friendly debate about moral foundations theory that I participated in and, after hearing my remarks, decided to see whether the differences between liberals and conservatives in terms of moral intuitions would disappear after statistically adjusting for authoritarianism and social dominance orientation. He conducted a few studies and found that they did, and then he contacted me, and we ended up collaborating on this research, collecting additional data using newer measures developed by moral foundations theorists as well as measures of outgroup hostility.
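To make “statistically adjusting” concrete: the usual approach is to regress each moral-foundation score on political orientation with and without the covariates, and check whether the orientation coefficient shrinks toward zero once authoritarianism and social dominance orientation are held constant. Here is a minimal sketch using simulated data; the variable names and effect sizes are invented, so this illustrates the general regression-adjustment technique, not the study’s actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data; the real study used participants' scale scores.
rng = np.random.default_rng(0)
n = 500
rwa = rng.normal(size=n)  # right-wing authoritarianism score
sdo = rng.normal(size=n)  # social dominance orientation score
conservatism = 0.6 * rwa + 0.4 * sdo + rng.normal(scale=0.7, size=n)
purity = 0.7 * rwa + rng.normal(scale=0.7, size=n)  # a "binding" foundation
df = pd.DataFrame({"rwa": rwa, "sdo": sdo,
                   "conservatism": conservatism, "purity": purity})

# Unadjusted model: conservatism predicts endorsement of purity.
raw = smf.ols("purity ~ conservatism", data=df).fit()

# Adjusted model: with RWA and SDO held constant, the conservatism
# coefficient should shrink toward zero in data generated this way.
adj = smf.ols("purity ~ conservatism + rwa + sdo", data=df).fit()

print(f"conservatism coefficient, unadjusted: {raw.params['conservatism']:.2f}")
print(f"conservatism coefficient, adjusted:   {adj.params['conservatism']:.2f}")
```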

What does it mean for moral foundations theory?

To me, it means that scholars may need to clean up some of the conceptual confusion in this area of moral psychology, and researchers need to face up to the fact that some moral intuitions (things that people may think are morally relevant and may use as a basis for judging others) may lead people to behave in an unethical, discriminatory manner. But we need behavioral research, such as studies of actual discrimination, to see if this is actually the case. So far the evidence is mainly circumstantial.

And what future research is to come along these lines from you?

One of my students decided to investigate the relationship between system justification and its motivational antecedents, on one hand, and the endorsement of moral foundations, on the other. This work also suggests that the rhetorical contrast between moral foundations theory and other research in social psychology was exaggerated. We are finding that, of the variables we have included, empathy is the best psychological predictor of endorsing fairness and the avoidance of harm as moral concerns, whereas the endorsement of group loyalty, obedience to authority, and purity concerns is indeed linked to epistemic motives to reduce uncertainty (such as the need for cognitive closure) and existential motives to reduce threat (such as death anxiety) and to system justification in the economic domain. So, at a descriptive level, moral foundations theory is entirely consistent with system justification theory.

Finally, I’ve only asked some selective questions, and I’d like to conclude by asking what I always ask in interviews like this—What’s the most important question that I didn’t ask? And what’s the answer to it?

Do I think that social science can help to address some of the problems we face as a society? Yes, I am holding out hope that it can, at least in the long run, and hoping that our leaders will come to realize this eventually.

Our conversation leads me to want to add one more question. Haidt’s basic argument could be characterized as a combination of anthropology (look at all the “moral principles” different cultures have advanced) and the broad equation of morality with the restraint of individual self-interest and/or desire. Your paper, bringing to attention the roles of SDO and RWA, throws into sharp relief a key problem with such a formulation—one that Southern elites have understood for centuries: wholly legitimate individual self-interest (and even morality—adequately feeding and providing a decent future for one’s children, for example) can easily be overridden by appeals to heinous “moral concerns,” such as “racial purity” or, more broadly, upholding the “God-given racial order.”

Yet Haidt does seem to have an important point: individualist moral concerns leave something unsaid about the value of the social dimension of human experience, which earlier moral traditions have addressed. Do you see any way forward toward developing a more nuanced account of morality, one that benefits from the criticism that harm-avoidance and fairness may be too narrow a foundation, without embracing the sorts of problematic alternatives put forward so far?

Yes, and there is a long tradition of theory and research on social justice—going all the way back to Aristotle—that involves a rich, complex, nuanced analysis of ethical dilemmas, one that goes well beyond the assumption that fairness is simply about positive and negative reciprocity.

Without question, we are a social species with relational needs and dependencies, and how we treat other people is fundamental to human life, especially when it comes to our capacity for cooperation and social organization. When we are not engaging in some form of rationalization, there are clearly recognizable standards of procedural justice, distributive justice, interactional justice, and so on. Even within the domain of distributive justice—which has to do with the allocation of benefits and burdens in society—there are distinct principles of equity, equality, and need, and in some situations these principles may be in conflict or contradiction.

How to reconcile or integrate these various principles in theory and practice is no simple matter, and this, it seems to me, is what we should focus on working out. We should also focus on solving other dilemmas, such as how to integrate utilitarian, deontological, virtue-theoretical, and social contractualist forms of moral reasoning, because each of these—in my view—has some legitimate claim on our attention as moral agents.

Related Situationist posts:

To review the full collection of Situationist posts related to system justification, click here.

Posted in Ideology, Morality, Situationist Contributors, Social Psychology, System Legitimacy | 3 Comments »

Paul Bloom on the Situational Effects of Religion

Posted by The Situationist Staff on December 3, 2013

Paul Bloom, Professor of Psychology and Cognitive Science at Yale University and contributing author of the 2012 Annual Review of Psychology, talks about his article “Religion, Morality, Evolution.” How did religion evolve? What effect does religion have on our moral beliefs and moral actions? These questions are related, as some scholars propose that religion has evolved to enhance altruistic behavior toward members of one’s group. But, Bloom argues, while religion has powerfully good moral effects and powerfully bad moral effects, these are due to aspects of religion that are shared by other human practices. There is surprisingly little evidence for a moral effect of specifically religious beliefs.

Find the article here.  

Related Situationist posts:

Posted in Altruism, Conflict, Ideology, Morality | 2 Comments »

Mahzarin Banaji on “Group Love”

Posted by The Situationist Staff on November 17, 2013


From Yale News (by Phoebe Kimmelman):

On Thursday evening, Harvard psychologist Mahzarin Banaji delivered a talk entitled “Group Love,” in which she demonstrated that the audience held an implicit bias for Yale over Princeton.

Banaji, who worked as a professor of psychology at Yale from 1986 to 2002 before taking a similar post at Harvard, focused in her talk on how group affiliations, or the lack thereof, affect the ways in which we see the world and interact with others. In her research, Banaji has helped bring Freudian theories of the subconscious into the psychology laboratory to be tested empirically.

University President Peter Salovey delivered introductory remarks, saying Banaji had been the “heart and soul” of the Yale psychology department during her 16 years there.

“She is one of those scientists who changes her field with her insights and her empirical data, with a deep sense of social responsibility to her colleagues, her students and her field,” Salovey said.

In the lecture to roughly 100 people, Banaji first discussed an experiment she did in 2006 at Harvard that involved monitoring participants’ brain activity while they answered random questions about two hypothetical people, presented with only their political preferences. Neuroimaging showed that the subjects used different areas of the brain to make predictions about people with whom they agree and those with whom they disagree. Banaji used this study to introduce the idea of love of the in-group, a preference people have for a group of people who think the way that they themselves do.

Through presenting multiple studies, Banaji demonstrated the magnitude of positive bias towards the in-group in subjects ranging from sports fans to elementary school students. While we may not be able to eliminate our biases, Banaji said certain cognitive strategies can “outsmart” them. For instance, Banaji said she rotates among her computer screensavers images that defy racial and gender stereotypes.

“It’s not that we hate people of another group, but it’s love for the in-group that’s paramount,” she said.

Salovey and Banaji, who started as faculty at Yale on the very same day, were close friends and next door neighbors, he said. Salovey recalled that he and Banaji were each other’s “support systems” while writing PSYC 110 lectures together.

Banaji came to campus for this year’s Silliman Memorial Lecture, an annual lectureship that began in 1888 and has brought such prominent scientific figures to campus as J.J. Thomson and Ernest Rutherford. Though a committee of faculty from Yale science departments usually chooses a speaker whose research is in the hard natural sciences, committee chair and Sterling professor of molecular biophysics and biochemistry Joan Steitz said that her colleagues were eager to hear from Banaji this year. Though the lecture has no affiliation with Silliman College, the endowment is named for the mother of Benjamin Silliman, a scientist after whom the college is named.

“If you think about the impact that psychology and neurobiology and brain science [are] having these days, the committee did not consider it at all inappropriate to be going in that direction with this particular lecture,” Steitz said.

Since leaving Yale in 2002, Banaji has served as a professor of social ethics in Harvard’s psychology department, where she has continued her research on how unconscious thinking plays out in social situations.

Nick Friedlander ’17 said he found the lecture “eye-opening” because it revealed biases he did not know he held before.

For Zachary Williams ’17, the lecture demonstrated how little of the conscious mind controls mental processes.

“It was truly a treat to be able to sit in close quarters with such a fantastic paragon of academia and hear her talk about such relevant topics,” he said.

Banaji’s most recent book is entitled “Blindspot: Hidden Biases of Good People.”

Related Situationist posts:

Posted in Emotions, Implicit Associations, Morality, Neuroscience, Situationist Contributors | 1 Comment »

Legal theory must incorporate discoveries from biology and behavioral sciences

Posted by Fábio Portela on October 15, 2013

Some recent discoveries in evolutionary biology, ethology, neurology, cognitive psychology, and behavioral economics impel us to rethink the very foundations of law if we want to answer the many questions that remain unanswered in legal theory. Where does our ability to interpret rules and think in terms of fairness in relation to others come from? Does the ability to reason about norms derive from certain aspects of our innate rationality and from mechanisms that were sculpted in our moral psychology by evolutionary processes?

Legal theory must take the complexity of the human mind into account

Any answer to these foundational issues requires us to take into consideration what these other sciences are discovering about how we behave. For instance, ethology has shown that many moral behaviors we usually think are uniquely displayed by our species have been identified in other species as well.

Please watch this video, a lecture by primatologist Frans de Waal for TED Talks:

The skills needed to feel empathy, to engage in mutual cooperation, to react to certain injustices, to form coalitions, to share, and to punish those who refuse to comply with expected behaviors, among many others, were once considered exclusive to humans, but they have been observed in many animal species, especially those closer to our evolutionary lineage, such as the great apes. In the human case, these instinctive elements are also present. Even small children around the age of one show a considerable capacity for moral cognition: they can identify patterns of distributive justice, even though they cannot explain how they reached a given conclusion (at that age, they cannot even speak!).

In addition, several studies have shown that certain neural circuits in our brains are actively involved in processing information related to capacities typical of normative behavior. Think about the ability to empathize, for example. It is an essential skill that prevents us from seeing other people as things or mere means. Empathy is needed to respect the Kantian categorical imperative to treat others as ends in themselves, and not as means to achieve other ends. This is something many psychopaths cannot do, because they suffer a severe reduction in their ability to empathize with others. fMRI studies have shown, year after year, that many diagnosed psychopaths have deficits in brain areas that have been associated with empathy.

If this sounds like science fiction, please consider the following cases.

A 40-year-old man who had hitherto displayed absolutely normal sexual behavior was kicked out by his wife after she discovered that he was visiting child pornography sites and had even tried to sexually molest children. He was arrested, and the judge determined that he would have to complete a rehabilitation program for sexual addiction or face jail. But he was soon expelled from the program after propositioning women there. Just before being arrested again for failing the program, he felt a severe headache and went to a hospital, where he underwent an MRI exam. The doctors identified a tumor in his orbitofrontal cortex, a brain region associated with moral judgment, impulse control, and the regulation of social behavior. After the removal of the tumor, his behavior returned to normal. Seven months later, he once more showed deviant behavior, and further tests revealed that the tumor had reappeared. After the removal of the new tumor, his sexual behavior again returned to normal.

You could also consider the case of Charles Whitman. Until he was 24, he had been a reasonably normal person. However, on August 1, 1966, he ascended to the top of the Tower of the University of Texas, where, armed to the teeth, he killed 13 people and wounded 32 before being killed by the police. It was later discovered that, just before the mass killings, he had also murdered both his wife and his mother. The previous day, he had left a typewritten letter in which one could read the following:

“I do not quite understand what it is that compels me to type this letter. Perhaps it is to leave some vague reason for the actions I have recently performed. I do not really understand myself these days. I am supposed to be an average reasonable and intelligent young man. However, lately (I cannot recall when it started) I have been a victim of many unusual and irrational thoughts.”

In the letter, he also requested an autopsy after his death in order to verify whether there was something wrong with his brain. Whitman’s brain was examined and … surprise! … the doctors found a glioblastoma tumor compressing the region of his amygdala, which is associated with the regulation of aggression and fear.

What does this mean for legal theory? At the very least, it means that law has so far been based on a false metaphysical conception: that the brain is a Lockean blank slate and that our actions derive from our rational dispositions. Criminal law theory assumes that an offender breaks the law exclusively out of free will and reasoning. Private law assumes that people sign contracts only after considering all their possible legal effects and are fully conscious of the reasons that motivated them to do so. Constitutional theory assumes that everyone is endowed with a rational disposition that enables the free exercise of civil and constitutional rights such as freedom of expression or freedom of religion. It is not in question that we are able to exercise such rights. But these examples show that the capacity to interpret norms and to act in accordance with the law does not derive from a blank slate endowed with free will and rationality, but from a complex mind that evolved in our hominin lineage and relies on brain structures that enable us to reason and choose among alternatives.

This means that our rationality is not perfect. It is affected not only by tumors but also by various cognitive biases that shape our decisions. Since the 1970s, psychologists have studied these biases. Daniel Kahneman, for example, won the 2002 Nobel Prize in Economic Sciences for his research on their impact on decision-making. We can make genuinely irrational decisions because our mind relies on certain heuristics (fast-and-frugal rules) to evaluate situations. In most situations, these heuristics help us make the right decisions, but they can also lead us into serious mistakes.

There are dozens of heuristics that structure our rationality. We are terrible at assessing the significance of statistical correlations, we discard unfavorable evidence, we tend to follow the most common behavior in our group (the herd effect), and we tend to see past events as if they had been easily predictable. We are inclined to cooperate with those who are part of our group (the parochialist bias), but not so much with those who belong to another group. And these are just some of the biases that have already been identified.

It is really hard to overcome these biases, because they are part and parcel of what we call rationality. Sure, with some effort, we can avoid many mistakes by using techniques that lead us to unbiased, correct answers. But such artificial techniques can be expensive and demand a lot of effort. We can use a computer, and train our mathematical skills, to overcome the biases that cause errors in statistical evaluation, for instance. But how can we use a computer to reason about moral or legal issues while “getting around” these psychological biases? Probably, we can’t.

The best we can do is to reconsider the psychological assumptions of legal theory, taking into account what we actually know about our psychology and how it affects our judgment. And there is evidence that these biases really do influence how judges evaluate cases. For instance, a study by Birte Englich, Thomas Mussweiler, and Fritz Strack concluded that even legal experts are affected by cognitive biases. More specifically, they studied the effect of the anchoring bias on judicial activity by running the following experiment with 52 legal experts: the participants were asked to examine a hypothetical court case and determine the sentence in a fictitious shoplifting trial. After reading the materials, they had to answer a questionnaire, at the end of which they would set the sentence.

Before answering the questions, however, the participants had to throw a pair of dice to determine the prosecutor’s sentencing demand. Half of the dice were loaded to always show 1 and 2, and the other half to always show 3 and 6; the sum of the two numbers indicated the prosecutor’s demand. Afterwards, the participants answered questions about legal issues concerning the case, including the sentencing decision. The researchers found that the dice results had a real impact on the proposed sentences: the average penalty imposed by judges whose dice showed the higher sum (3 + 6 = 9) was 7.81 months in prison, while participants whose dice showed the lower sum (1 + 2 = 3) proposed an average punishment of 5.28 months.
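As a back-of-the-envelope illustration of the effect just described, here is a small simulation. Only the two anchors (9 and 3 months) come from the study; the judges’ baseline sentence and the strength of the anchoring pull are invented parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_sentences(anchor_months, n_judges=26, baseline=6.0, pull=0.35):
    """Each judge's sentence is modeled as a weighted blend of a private
    estimate and the (legally irrelevant) anchor set by the dice."""
    private = rng.normal(loc=baseline, scale=1.5, size=n_judges)
    return (1 - pull) * private + pull * anchor_months

high = simulate_sentences(anchor_months=9)  # dice showed 3 + 6
low = simulate_sentences(anchor_months=3)   # dice showed 1 + 2
print(f"high-anchor group mean: {high.mean():.2f} months")
print(f"low-anchor group mean:  {low.mean():.2f} months")
```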

In another study, it was found that, on average, tired and hungry judges tend to take the easy decision of denying parole rather than granting it. In the study, conducted in Israel, researchers divided the judges’ daily schedule into three sessions, at the beginning of each of which the judges could rest and eat. It turned out that, soon after eating and resting, judges granted parole in about 65% of cases; by the end of each session, the rate fell to almost zero. Admittedly, this is not really a cognitive bias but a physiological condition; still, it shows that a tired mind and the body’s energy needs can induce decisions that almost everyone would consider intrinsically unfair.

And so on. Study after study shows that (1) our ability to develop moral reasoning is innate, (2) our mind is filled with innate biases that are needed to process cultural information concerning compliance with moral and legal norms, and (3) these biases affect our rationality.

This research raises many questions that legal scholars will have to face sooner or later. Would anyone say that due process of law is respected when judges anchor their decisions on completely external factors, factors of which they are not even aware? Of course, the dice study was conducted in a controlled setting, and nobody expects a judge to roll dice before deciding a case. But might a judge be influenced by other anchors, such as the numbers on a clock, a date on the calendar, or a number printed on a banknote? And would anyone consider due process respected if parole was denied simply because the case was heard late in the morning? These external elements decisively influenced the judicial outcomes, yet none of them were mentioned in the decisions.

Legal theory needs to incorporate this knowledge into its structure. We need to build institutions capable of taking biases into account and, as far as possible, circumventing them or at least diminishing their influence. For instance, knowing that judges tend to become impatient and harsher toward defendants when they are hungry and tired, a court could require judges to take a 30-minute break after every three hours of work in order to restore their capacity to be as impartial as possible. This is just one small suggestion of how institutions could respond to these discoveries.

Of course, there are more complex cases, such as the discussion about offenders who had always displayed good behavior but had the misfortune of developing a brain tumor that contributed to the commission of a crime. Criminal theory is based on the thesis that the agent must intentionally engage in criminal conduct. But is it possible to talk about intention when a tumor was a direct cause of the act? And what if it was not a tumor but a brain malformation, as occurs in many cases of psychopathy? Saying that criminal law can already handle these cases, by holding that such offenders lack responsibility due to their condition, does not solve the problem, because the issue lies in the very concept of intention that legal theory assumes.

And this problem extends into the rest of legal theory. We must take into account the role of cognitive biases in consumer relations. The law has not yet recognized the role of these biases in decision-making, but many companies are well aware of them. How many times have you bought a 750 ml soda for $2.00 just because it cost only $0.20 more than the 500 ml one? You probably reasoned that you were paying less per ml than if you had bought the smaller size. But all you really wanted was 500 ml, and you paid extra for soda you didn’t want! In other words, the company simply exploits a particular bias that affects most people in order to induce them to buy more of its products. Another example: for evolutionary reasons, humans are prone to consume fatty and sugary foods. Companies exploit this fact to their advantage, which contributes to the obesity crisis we see in the world today. In their defense, companies say that consumers purchased the product of their own accord. What they do not say, but neuroscience and evolutionary theory do, is that our “free will” has a long evolutionary history that propels us to consume exactly the kinds of food that, over the years, damage our health. Law needs to take these facts into consideration if it wants to adequately protect and enforce consumer rights.

Law is still based on an “agency model” very similar to game theory’s assumption of rationality. But we are not perfectly rational; every decision we make is influenced by the way our mind operates. Can we really think it fair to blame someone who committed a crime on the basis of erroneous judgments generated by a cognitive bias? And, on the other hand, would it be right to exonerate a defendant on those grounds? To answer these and other hard questions, legal scholars must rethink the concept of the person assumed by law, taking into account our intrinsic biological nature.

Related Situationist posts:


Posted in Legal Theory, Morality, Neuroscience, Philosophy | Tagged: , , , , , | 3 Comments »

“Ordinary Men” in Evil Situations

Posted by The Situationist Staff on October 3, 2013

A few excerpts from an outstanding 1992 New York Times book review by Walter Reich of Christopher Browning’s remarkable book, “Ordinary Men: Reserve Police Battalion 101 and the Final Solution in Poland”:

We know a lot about how the Germans carried out the Holocaust. We know much less about how they felt and what they thought as they did it, how they were affected by what they did, and what made it possible for them to do it. In fact, we know remarkably little about the ordinary Germans who made the Holocaust happen — not the desk murderers in Berlin, not the Eichmanns and Heydrichs, and not Hitler and Himmler, but the tens of thousands of conscripted soldiers and policemen from all walks of life, many of them middle-aged, who rounded up millions of Jews and methodically shot them, one by one, in forests, ravines and ditches, or stuffed them, one by one, into cattle cars and guarded those cars on their way to the gas chambers.

In his finely focused and stunningly powerful book, “Ordinary Men: Reserve Police Battalion 101 and the Final Solution in Poland,” Christopher R. Browning tells us about such Germans and helps us understand, better than we did before, not only what they did to make the Holocaust happen but also how they were transformed psychologically from the ordinary men of his title into active participants in the most monstrous crime in human history. In doing so he aims a penetrating searchlight on the human capacity for utmost evil and leaves us staring at his subject matter with the shock of knowledge and the lurking fear of self-recognition.

* * *

In the end, what disturbs the reader more than the policemen’s escape from punishment is their capacity — as the ordinary men they were, as men not much different from those we know or even from ourselves — to kill as they did.

Battalion 101’s killing wasn’t, as Mr. Browning points out, the kind of “battlefield frenzy” occasionally seen in all wars, when soldiers, having faced death, and having seen their friends killed, slaughter enemy prisoners or even civilians. It was, rather, the cold-blooded fulfillment of German national policy, and involved, for the policemen, a process of accommodation to orders that required them to do things they would never have dreamed they would ever do, and to justify their actions, or somehow reinterpret them, so that they would not see themselves as evil people.

Mr. Browning’s meticulous account, and his own acute reflections on the actions of the battalion members, demonstrate the important effect that the situation had on those men: the orders to kill, the pressure to conform, and the fear that if they didn’t kill they might suffer some kind of punishment or, at least, damage to their careers. In fact, the few who tried to avoid killing got away with it; but most believed, or at least could tell themselves, that they had little choice.

But Mr. Browning’s account also illustrates other factors that made it possible for the battalion’s ordinary men not only to kill but, ultimately, to kill in a routine, and in some cases sadistic, way. Each of these factors helped the policemen feel that they were not violating, or violating only because it was necessary, their personal moral codes.

One such factor was the justification for killing provided by the anti-Semitic rationales to which the policemen had been exposed since the rise of Nazism, rationales reinforced by the battalion’s officers. The Jews were presented not only as evil and dangerous but also, in some way, as responsible for the bombing deaths of German women and children. Another factor was the process of dehumanization: abetted by Nazi racial theories that were embraced by policemen who preferred not to see themselves as killers, Jews were seen as less than people, as creatures who could be killed without the qualms that would be provoked in them were they to kill fellow Germans or even Slavs. It was particularly when the German policemen came across German Jews speaking their own language, especially those from their own city, that they felt a human connection that made it harder to kill them.

The policemen were also helped by the practice of trying not to refer to their activities as killing: they were involved in “actions” and “resettlements.” Moreover, the responsibility wasn’t theirs; it belonged to the authorities — Major Trapp as well as, ultimately, the leaders of the German state — whose orders they were merely carrying out. Indeed, whatever responsibility they did have was diffused by dividing the task into parts and by sharing it with other people and processes. It was shared, first of all, by others in the battalion, some of whom provided cordons so that Jews couldn’t escape and some of whom did the shooting. It was shared by the Trawnikis, who were brought in to do the shooting whenever possible so that the battalion could focus on the roundups. And it was shared, most effectively, by the death camps, which made the men’s jobs immensely easier, since stuffing a Jew into a cattle car, though it sealed his fate almost as surely as a neck shot, left the actual killing to a machine-like process that would take place far away, one for which the battalion members didn’t need to feel personally responsible.

CLEARLY, ordinary human beings are capable of following orders of the most terrible kinds. What stands between civilization and genocide is the respect for the rights and lives of all human beings that societies must struggle to protect. Nazi Germany provided the context, ideological as well as psychological, that allowed the policemen’s actions to happen. Only political systems that recognize the worst possibilities in human nature, but that fashion societies that reward the best, can guard the lives and dignity of all their citizens.

* * *

Read the entire review here.  Read more about the book here.

Related Situationist posts:

Posted in Conflict, History, Ideology, Morality, Uncategorized | Leave a Comment »

Cheater’s Buzz

Posted by The Situationist Staff on September 14, 2013


From Newswire:

People who get away with cheating when they believe no one is hurt by their dishonesty are more likely to feel upbeat than remorseful afterward, according to new research published by the American Psychological Association.

Although people predict they will feel bad after cheating or being dishonest, many of them don’t, reports a study published online in APA’s Journal of Personality and Social Psychology.

“When people do something wrong specifically to harm someone else, such as apply an electrical shock, the consistent reaction in previous research has been that they feel bad about their behavior,” said the study’s lead author, Nicole E. Ruedy, of the University of Washington. “Our study reveals people actually may experience a ‘cheater’s high’ after doing something unethical that doesn’t directly harm someone else.”

Even when there was no tangible reward, people who cheated felt better on average than those who didn’t cheat, according to results of several experiments that involved more than 1,000 people in the U.S. and England. A little more than half the study participants were men, with 400 from the general public in their late 20s or early 30s and the rest in their 20s at universities.

Participants predicted that they or someone else who cheated on a test or logged more hours than they had worked to get a bonus would feel bad or ambivalent afterward. When participants actually cheated, they generally got a significant emotional boost instead, according to responses to questionnaires that gauged their feelings before and after several experiments.

In one experiment, participants who cheated on math and logic problems were overall happier afterward than those who didn’t and those who had no opportunity to cheat. The participants took tests on computers in two groups. In one group, when participants completed an answer, they were automatically moved to the next question. In the other group, participants could click a button on the screen to see the correct answer, but they were told to disregard the button and solve the problem on their own. Graders could see who used the correct-answer button and found that 68 percent of the participants in that group did, which the researchers counted as cheating.

People who gained from another person’s misdeeds felt better on average than those who didn’t, another experiment found. Researchers at a London university observed two groups in which each participant solved math puzzles while in a room with another person who was pretending to be a participant. The actual participants were told they would be paid for each puzzle they solved within a time limit and that the other “participant” would grade the test when the time was up. In one group, the actor inflated the participant’s score when reporting it to the experimenter. In the other group, the actor scored the participant accurately. None of the participants in the group with the cheating actor reported the lie, the authors said.

In another trial, researchers asked the participants not to cheat because it would make their responses unreliable, yet those who cheated were more likely to feel more satisfied afterward than those who didn’t. Moreover, the cheaters who were reminded at the end of the test how important it was not to cheat reported feeling even better on average than other cheaters who were not given this message, the authors said. Researchers gave participants a list of anagrams to unscramble and emphasized that they should unscramble them in consecutive order and not move on to the next word until the previous anagram was solved. The third jumble on the list was “unaagt,” which can spell only the word taguan, a species of flying squirrel. Previous testing has shown that the likelihood of someone solving this anagram is minuscule. The graders considered anyone who went beyond the third word to have cheated and found that more than half the participants did, the authors said.

“The good feeling some people get when they cheat may be one reason people are unethical even when the payoff is small,” Ruedy said. “It’s important that we understand how our moral behavior influences our emotions. Future research should examine whether this ‘cheater’s high’ could motivate people to repeat the unethical behavior.”

________________________________________

Article: “The Cheater’s High: The Unexpected Affective Benefits of Unethical Behavior,” Nicole E. Ruedy, PhD, University of Washington; Celia Moore, PhD, London Business School; Francesca Gino, PhD, Harvard University; and Maurice E. Schweitzer, PhD, University of Pennsylvania; Journal of Personality and Social Psychology, online, Sept. 3, 2013.

Related Situationist posts:

Posted in Emotions, Morality | 2 Comments »

The Situation of Secret Pleasures (more on Dan Wegner’s Work)

Posted by The Situationist Staff on July 18, 2013


This excerpt, which highlights some of the remarkable work by the late Dan Wegner, comes from an article written by Eric Jaffe in a 2006 edition of the APS’s Observer:

“Freud’s Fundamental Rule of Psychoanalysis was for patients to be completely open with a therapist no matter how silly or embarrassing the thought,” says Anita Kelly, a researcher at the University of Notre Dame who published one of the first books on the formal study of secrets, The Psychology of Secrets, in 2002.

Only since the late 1980s and early 1990s have researchers like Daniel Wegner and James Pennebaker put Freud through the empirical wringer and begun to understand the science behind secrets. “The Freudian way of thinking about things was, he assumed suppression took place and looked at what was happening afterwards,” says Wegner, a psychologist at Harvard. “The insight we had was, let’s not wait until after the fact and assume it occurred, let’s get people to try to do it and see what happened. That turned out to be a useful insight; it opened this up to experimental research. It became a lab science instead of an after-the-fact interpretation of people’s lives.”

For Wegner, an interest in secrets began with a white bear. In Russian folklore attributed to Dostoevsky, Tolstoy, or sometimes both, a man tells his younger brother to sit in the corner and not think of a white bear, only to find later that the sibling can think of nothing else. If a meaningless white bear can arouse such frustration, imagine the crippling psychological effects of trying not to think of something with actual importance when the situation requires silence — running into the wife of a friend who has a mistress, being on a jury and having to disregard a stunning fact, or hiding homosexuality in a room full of whack-happy wiseguys.

So in 1987, Wegner, who at that time was at Trinity University, published a paper in the Journal of Personality and Social Psychology discussing what happens when research subjects confront the white bear in a laboratory. In the study, subjects entered a room alone with a tape recorder and reported everything that came to mind in a five-minute span. Before the experiment, Wegner told some subjects to think of anything except a white bear, and told others to try to think of a white bear. Afterwards, the subjects switched roles. Any time a subject mentioned, or merely thought of, a white bear, he or she had to ring a bell inside the room.

It was not quite Big Ben at noon, but those who suppressed the white bear rang the bell once a minute — more often than subjects who were free to express the thought. More remarkably, Wegner found what he called the “rebound effect”: When a subject was released from suppression and told to express a hidden thought, it poured out with greater frequency than if it had been mentionable from the start. (Think fresh gossip.) He also found evidence for an insight called “negative cuing.” The idea is that a person trying to ditch a thought will look around for something to displace it — first at the ceiling fan, then a candle, then a remote control. Soon the mind forms a latent bond between the unwanted thought and the surrounding items, so that everything now reminds the person of what he is trying to forget, exacerbating the original frustration.

“People will tend to misread the return of unwanted thoughts,” Wegner said recently. “We don’t realize that in keeping it secret we’ve created an obsession in a jar.” Wegner told the story of a suicidal student who once called him for help. Desperate to keep her on the phone, but lacking any clinical training, Wegner mentioned the white bear study. Slowly the student realized she had perpetuated a potentially fleeting thought by trying to avoid it. “She got so twisted up in the fact that she couldn’t stop thinking of killing herself, that she was making it come back to mind. She was misreading this as, there’s some part of me that wants to do it. What she really wanted was to get rid of the thought.”

One method of diverting attention from an unwanted thought, says Wegner, is to focus on a single distraction from the white bear, like a red Volkswagen, an idea that he tested successfully in later experiments. The concern with this technique, which Freud first laid out, is that a person could become obsessed with an arbitrary item, planting the seeds for abnormal behavior. In a later experiment, published in 1994 in the same journal, Wegner found more evidence that secrets lead to strange obsession. He placed four subjects who had never met around a table, split them into two male-female teams, and told them to play a card game. One team was instructed to play footsie without letting the other team know. At the end of the experiment, the secret footsie-players felt such a heightened attraction toward one another that the experimenters made them leave through separate doors, for ethical reasons. “We can end up being in a relationship we don’t want, or interested in things that aren’t at all important, because we had to keep them quiet,” Wegner said, “and it ends up growing.”

Live Free or Die

The logical opposite of an unhealthy obsession based on secrets is a healthy result from disclosing such secrets. This healing aspect of revelation is where Wegner’s work connects with James Pennebaker’s. In the late 1970s, Pennebaker was part of a research team that found, via survey, that people who had a traumatic sexual experience before age 17 were more likely to have health problems as they got older. Pennebaker looked further and found that the majority of these people had kept the trauma hidden, and in 1984 he began the first of many studies on the effects of revealing previously undisclosed secrets.

In most of Pennebaker’s experiments, subjects visited a lab for three or four consecutive days, each time writing about traumatic experiences for 15 or 20 minutes. In the first five years, hundreds of people poured their secrets onto the page. A college girl who knew her father was seeing his secretary; a concentration camp survivor who had seen babies tossed from a second-floor orphanage window; a Vietnam veteran who once shot a female fighter in the leg, had sex with her, then cut her throat. By the end of the experiment, many participants felt such intense release that their handwriting became freer and loopier. In one study of 50 students, those who revealed both a secret and their feelings visited the health center significantly fewer times in the ensuing six months than other students who had written about a generic topic, or those who had only revealed the secret and not the emotions surrounding it.

The work led to many papers showing evidence that divulging a secret, which can mean anything from telling someone to writing it on a piece of paper that is later burned, is correlated with tangible health improvements, both physical and mental. People hiding traumatic secrets showed more incidents of hypertension, influenza, even cancer, while those who wrote about their secrets showed, through blood tests, enhanced immune systems. In some cases, T-cell counts in AIDS patients increased. In another test, Pennebaker showed that writing about trauma actually unclogs the brain. Using an electroencephalogram, an instrument that measures brain waves through electrodes attached to the scalp, he found that the right and left brains communicated more frequently in subjects who disclosed traumas.

(It should be noted that the type of secrets discussed in this article are personal secrets—experiences a person chooses not to discuss with others. They can be positive, in the case of hiding a birthday cake, or negative, in the case of hiding a mistress. Secrets that could be considered “non-personal,” for example, information concealed as part of a job, were not specifically addressed.)

Exactly why revelation creates such health benefits is a complicated question. “Most people in psychology have been trained to think of a single, parsimonious explanation for an event,” said Pennebaker, who did much of his research at Southern Methodist University before coming to the University of Texas, where he is chair of the psychology department. “Well, welcome to the real world. There are multiple levels of explanation here.” Pennebaker lists a number of reasons for the health improvements. Writing about a secret helps label and organize it, which in turn helps the writer understand features of the secret that had been ignored. Revelation can become habitual in a positive sense, making confrontation normal. Disclosure can reduce rumination and worry, clearing the mental quagmires that hinder social relationships. People become better listeners. They even become better sleepers. “The fact is that all of us occasionally are dealing with experiences that are hard to talk about,” Pennebaker said. “Getting up and putting experiences into words has a powerful effect.”

At the end of a recent Sopranos episode, Vito looks most content after seeing a New Hampshire license plate, with its state motto: “Live free or die.” Pennebaker’s research may add a new level of truth to that phrase.

Little Machiavellis

In the early 1990s, it was not unusual for 3-year-old Jeremy Peskin to want a cookie. His mother, Joan, used to hide them in the high cupboards of their home in Toronto; when she left, Jeremy would climb up and sneak a few. One day, Jeremy had a problem: He wanted a cookie, but his mother was in the kitchen. “He said to me, ‘Go out of the kitchen, because I want to take a cookie,’ ” Joan recalled recently. Unfortunately for Jeremy, Joan Peskin was a doctoral student in psychology at the time, and smart enough to see through the ruse. Fortunately for developmental researchers, Peskin’s experience led her to study when children first develop the capacity for secrets.

What interested Peskin, now a professor at the University of Toronto, was Jeremy’s inability to separate his mother’s physical presence from her mental state. In his mind, if she was out of the room he could take a cookie, whether or not she knew that he intended to take one. Peskin took this insight to the laboratory — in this case, local day-care centers — where she tried to get children ages three, four, and five to conceal a secret. She showed the children two types of stickers. The first, a gaudy, glittery sticker, aroused many a tiny smile; the second, a drab, beige sticker of an angel, was disliked. Then she introduced a mean puppet and explained that this puppet would take whatever sticker the children wanted most. When the puppet asked 4- and 5-year-olds which sticker they wanted, most of the children either lied or would not tell. The 3-year-olds almost always blurted out their preference, even when the scenario was repeated several times, she found in the study, which was published in Developmental Psychology in 1992. Often the 3-year-olds grabbed at the shiny sticker as the puppet took it away, showing a proper understanding of the situation but an inability to prevent it via secretive means.

The finding goes beyond secrets; 4 has become the age when psychologists think children develop the ability to understand distinct but related inner and outer worlds. “When I teach it I put a kid on the overhead with a thought bubble inside,” Peskin said. “When they could think of someone else’s mental state — say, ignorance, somebody not knowing something — that influences their social world.” In a follow-up study published in Social Development in 2003, Peskin found again that 3-year-olds were more likely than 4- or 5-year-olds to reveal the location of a surprise birthday cake to a hungry research confederate. “When a child is able to keep a secret,” Peskin says, “parents should take it as, that’s great, this is normal development. They aren’t going to be little Machiavellis. This is normal brain development.”

Confidence in Confidants

Soon after Mark Felt revealed himself as Deep Throat, the anonymous source who guided Bob Woodward during the Watergate scandal, Anita Kelly’s phone began to ring. “One morning I had 10 messages from different news groups,” she recalled recently. “They wanted me to say that secrecy’s a bad thing, and I’d say, look, there’s no evidence. This guy’s in his early 90s, and has seemed to have a healthy life.”

When preparing The Psychology of Secrets, Kelly re-examined the consequences and benefits of secret-keeping, and began to believe that while divulging secrets improves health, concealing them does not necessarily cause physical problems. “I couldn’t find any evidence that keeping a secret makes a person sick,” Kelly said. “There is evidence that by writing about held-back information someone will get health benefits. Someone keeping a secret would miss out on those benefits. It’s not the same as saying if you keep a secret you’re going to get sick.”

Her latest work, in press at the Journal of Personality, challenged the notion that secret-keeping can cause sickness. Instead of merely looking at instances of sickness nine weeks after disclosure, Kelly and co-author Jonathan Yip adjusted their measurements for initial levels of health. They found, quite simply, that secretive people also tend to be sick people, both now and two months down the line.

“It doesn’t look like the process of keeping the secret made them sick,” she said. High “self-concealers,” as Kelly calls them, tend to be more depressed, anxious, and shy, and have more aches and pains by nature, perhaps suggesting some natural link between being secretive and being vulnerable to illness. “I don’t think it’s much of a stretch to say that being secretive could be linked to being symptomatic at a biological level.”
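
As a rough illustration of what “adjusting measurements for initial levels of health” means statistically, the sketch below fits two regressions on simulated data, one with and one without a baseline-symptoms covariate. This is a hedged sketch under invented assumptions, not Kelly and Yip’s actual analysis; the variable names and effect sizes are made up.

```python
# Hedged illustration: if secretive people simply start out sicker, the
# apparent secrecy -> sickness effect should shrink toward zero once
# baseline symptoms enter the model. Simulated data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
concealment = rng.normal(size=n)                   # self-concealment score
baseline = 0.6 * concealment + rng.normal(size=n)  # secretive people start sicker
followup = 0.8 * baseline + rng.normal(size=n)     # later symptoms track baseline

naive = sm.OLS(followup, sm.add_constant(concealment)).fit()
X = sm.add_constant(np.column_stack([concealment, baseline]))
adjusted = sm.OLS(followup, X).fit()

print("naive secrecy coefficient:   ", round(naive.params[1], 2))    # clearly positive
print("adjusted secrecy coefficient:", round(adjusted.params[1], 2))  # near zero
```

On real data the adjusted coefficient need not land exactly at zero; the point is the comparison between the two fits, which is what separates “keeping secrets makes you sick” from “secretive people were already sicker.”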

This conclusion came gradually. In the mid-1990s, following Pennebaker’s line of research that had really opened up the field, Kelly focused on the health effects of revealing and concealing secrets. The research clearly showed links between secrets and illness. In a review of the field for Current Directions in Psychological Science in 1999, Kelly notes some of these health correlations: cases in which breast cancer patients who talked about their concealed emotions survived almost twice as long as those who did not; students who wrote about private traumatic events showed higher antibody levels four and six months after a Hepatitis B vaccination; and gay men who concealed their sexuality had a higher rate of cancer and infectious disease.

But in 1998 she did a study asking patients about their relationships with their therapists. She found that 40 percent of them were keeping a secret, but generally felt no stress as a result. Kelly began to believe that some secrets can be kept successfully, and that, in some scenarios, disclosing a secret could cause more problems than it solves. Psychologists, she felt, were not paying enough attention to the situations in which disclosure should occur — only that it did. “The essence of the problem with revealing personal information is that revealers may come to see themselves in undesirable ways if others know their stigmatizing secrets,” she wrote in the 1999 paper.

John Caughlin, a professor of communication at the University of Illinois at Urbana-Champaign who has studied secrets, agrees that sometimes openness is not the best policy. “People are so accustomed to saying an open relationship is a good one, that if they have secrets it can make them feel that something’s wrong,” he said recently. In 2005, Caughlin published a paper in Personal Relationships suggesting that people have a poor ability to forecast how they will feel after revealing a secret, and how another person will respond to hearing it. “I’m not touting that people should keep a lot of secrets,” he said, “but I don’t think people should assume it’s bad, and I think they do.” In her new book, Anatomy of a Secret Life, published in April, Gail Saltz, a professor of psychiatry at Cornell Medical School, referred to secrets as “benign” or “malignant,” depending on the scenario. “In teenagers, having secret identities is normal, healthy separation from parents and needs to go on,” said Saltz recently.

To address this concern, Kelly has focused her recent work on the role of confidants in the process of disclosure. She created a simple diagram advising self-concealers when they should, and when they should not, reveal a secret. On one hand, if the secret does not cause mental or physical stress, it should be kept, to provide a sense of personal boundary and avoid unnecessary social conflict. If it does cause anguish, the secret-keeper must then evaluate whether he or she has a worthy confidant, someone willing to work toward a cathartic insight. When such a confidant is not available, the person should write down his or her thoughts and feelings. “The world changes when you tell someone who knows all your friends,” said Kelly, who experienced this change firsthand 15 years back, when she shared with a colleague something “very personal and embarrassing,” as she called it, and then found her secret floating among her colleagues. “You have to think, what are the implications with my reputation,” she said. “It’s more complicated once you have to reveal to someone.”

To review a collection of Situationist posts discussing Dan Wegner’s research, click here.

Posted in Emotions, Life, Morality, Social Psychology | 1 Comment »

The Situation of Cheating Students

Posted by The Situationist Staff on June 29, 2013

From the American Psychological Association (excerpts from an article by Amy Novotney):

More than half of teenagers say they have cheated on a test during the last year — and 34 percent have done it more than twice — according to a survey of 40,000 U.S. high school students released in February by the nonprofit Josephson Institute of Ethics. The survey also found that one in three students admitted they used the Internet to plagiarize an assignment.

The statistics don’t get any better once students reach college. In surveys of 14,000 undergraduates conducted over the past four years by Donald McCabe, PhD, a business professor at Rutgers University and co-founder of Clemson University’s International Center for Academic Integrity, about two-thirds of students admit to cheating on tests, homework and assignments. And in a 2009 study in Ethics & Behavior (Vol. 19, No. 1), researchers found that nearly 82 percent of a sample of college alumni admitted to engaging in some form of cheating as undergraduates.

Some research even suggests that academic cheating may be associated with dishonesty later in life. In a 2007 survey of 154 college students, Southern Illinois University researchers found that students who plagiarized in college reported that they viewed themselves as more likely to break rules in the workplace, cheat on spouses and engage in illegal activities (Ethics & Behavior, Vol. 17, No. 3). A 2009 survey, also by the Josephson Institute of Ethics, reports a further correlation: People who cheat on exams in high school are three times more likely to lie to a customer or inflate an insurance claim compared with those who never cheated. High school cheaters are also twice as likely to lie to or deceive their boss and one-and-a-half times more likely to lie to a significant other or cheat on their taxes.

Academic cheating, therefore, is not just an academic problem, and curbing this behavior is something that academic institutions are beginning to tackle head-on, says Stephen F. Davis, PhD, emeritus professor of psychology at Emporia State University and co-author of “Cheating in School: What We Know and What We Can Do” (Wiley-Blackwell, 2009). New research by psychologists seems to suggest that the best way to prevent cheating is to create a campus-wide culture of academic integrity.

“Everyone at the institution — from the president of the university and the board of directors right on down to every janitor and cafeteria worker — has to buy into the fact that the school is an academically honest institution and that cheating is a reprehensible behavior,” Davis says.

Why students cheat

The increasing amount of pressure on students to succeed academically — in efforts to get into good colleges, graduate schools and eventually to land good jobs — tends to be one of the biggest drivers of cheating’s proliferation. Several studies show that students who are more motivated than their peers by performance are more likely to cheat.

“What we show is that as intrinsic motivation for a course drops, and/or as extrinsic motivation rises, cheating goes up,” says Middlebury College psychology professor Augustus Jordan, PhD, who led a 2005 study on motivation to cheat (Ethics and Behavior, Vol. 15, No. 2). “The less a topic matters to a person, or the more they are participating in it for instrumental reasons, the higher the risk for cheating.”

Psychological research has also shown that dishonest behaviors such as cheating actually alter a person’s sense of right and wrong, so after cheating once, some students stop viewing the behavior as immoral. In a study published in March in Personality and Social Psychology Bulletin (Vol. 37, No. 3), for example, Harvard University psychology and organizational behavior graduate student Lisa Shu and colleagues conducted a series of experiments, one of which involved having undergraduates read an honor code reminding them that cheating is wrong and then providing them with a series of math problems and an envelope of cash. The more math problems they were able to answer correctly, the more cash they were allowed to take. In one condition, participants reported their own scores, which gave them an opportunity to cheat by misreporting. In the other condition, participants’ scores were tallied by a proctor in the room. As might be expected, several students in the first condition inflated their scores to receive more money. These students also reported a greater degree of cheating acceptance after participating in the study than they had prior to the experiment. Shu and her colleagues also found that, while those who read the honor code were less likely to cheat, the honor code did not eliminate all of the cheating.

“Our findings confirm that the situation can, in fact, impact behavior and that people’s beliefs flex to align with their behavior,” Shu says.

Another important finding is that while many students understand that cheating is against the rules, most still look to their peers for cues as to what behaviors and attitudes are acceptable, says cognitive psychologist David Rettinger, PhD, of the University of Mary Washington. Perhaps not surprisingly, he says, several studies suggest that seeing others cheat increases one’s tendency to cheat.

“Cheating is contagious,” says Rettinger. In his 2009 study with 158 undergraduates, published in Research in Higher Education (Vol. 50, No. 3), he found that direct knowledge of others’ cheating was the biggest predictor of cheating.

Even students at several U.S. military academies — where student honor codes are widely publicized and strictly enforced — aren’t immune from cheating’s contagion. A longitudinal study led by University of California, Davis, economist Scott Carrell, PhD, examined survey data gathered from the U.S. Military Academy at West Point, U.S. Naval Academy and U.S. Air Force Academy from 1959 through 2002. Carrell found that, thanks to peer effects, one new college cheater is “created” through social contagion for every two to three additional high school cheaters admitted to a service academy.

“This behavior is most likely transmitted through the knowledge that other students are cheating,” says Carrell, who conducted the study with James West, PhD, and Frederick Malmstrom, PhD, both of the Air Force Academy. “This knowledge causes students — particularly those who would not have otherwise — to cheat because they feel like they need to stay competitive and because it creates a social norm of cheating.”
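
As a back-of-the-envelope reading of that estimate (the admissions figure below is invented for illustration):

```python
# Illustrative arithmetic only: the estimate implies roughly one new
# college cheater per two to three additional high school cheaters
# admitted. Suppose an academy admits 30 additional past cheaters:
additional_hs_cheaters = 30
low, high = additional_hs_cheaters / 3, additional_hs_cheaters / 2
print(f"newly 'created' college cheaters: about {low:.0f} to {high:.0f}")  # about 10 to 15
```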

Dishonesty prevention

Peer effects, however, cut both ways, and getting students involved in creating a culture of academic honesty can be a great way to curb cheating.

“The key is to create this community feeling of disgust at the cheating behavior,” says Rettinger. “And the best way to do that is at the student level.”

* * *

Teachers can also help diminish students’ impulse to cheat by explaining the purpose and relevance of every academic lesson and course assignment, says University of Connecticut educational psychologist Jason Stephens, PhD. According to research presented in 2003 by Stephens and later published in “The Psychology of Academic Cheating” (Elsevier, 2006), high school students cheat more when they see the teacher as less fair and caring and when their motivation in the course is more focused on grades and less on learning and understanding. In addition, in a 1998 study of cheating with 285 middle school students, Ohio State University educational psychologist Eric Anderman, PhD, co-editor with Tamara Murdock, PhD, of “The Psychology of Academic Cheating,” found that how teachers present the goals of learning in class is key to reducing cheating. Anderman showed that students who reported the most cheating perceived their classrooms as being more focused on extrinsic goals, such as getting good grades, than on mastery goals associated with learning for its own sake and continuing improvement (Journal of Educational Psychology, Vol. 90, No. 1).

“When students feel like assignments are arbitrary, it’s really easy for them to talk themselves into not doing it by cheating,” Rettinger says. “You want to make it hard for them to neutralize by saying, ‘This is what you’ll learn and how it’s useful to you.’”

At the college level in particular, it’s also important for institutional leaders to make fairness a priority by having an office of academic integrity to communicate to students and faculty that the university takes the issue of academic dishonesty seriously, says Tricia Bertram Gallant, PhD, academic integrity coordinator at the University of California, San Diego, and co-author with Davis of “Cheating in School.” . . .

* * *

There’s also evidence that focusing on honesty, trust, fairness, respect and responsibility and promoting practices such as effective honor codes can make a significant difference in student behaviors, attitudes and beliefs, according to a 1999 study by the Center for Academic Integrity. Honor codes seem to be particularly salient when they engage students, however. In Shu’s study on the morality of cheating, for example, she found that participants who passively read a generic honor code before taking a test were less likely to cheat on the math problems, though this step did not completely curb cheating. Among those who signed their names attesting that they’d read and understood the honor code, however, no cheating occurred.

“It was impressive to us how exposing participants to an honor code and really making morality salient in that situation basically eliminated cheating altogether,” she says.

Read the entire article here.

Posted in Education, Morality, Social Psychology | Leave a Comment »

Dan Ariely on the Psychology of Cheating

Posted by The Situationist Staff on June 6, 2013

Behavioral economist Dan Ariely studies the bugs in our moral code: the hidden reasons we think it’s OK to cheat or steal (sometimes). Clever studies help make his point that we’re predictably irrational — and can be influenced in ways we can’t grasp.

Posted in Morality, Social Psychology, Video | 2 Comments »

The Relational Situation of Whistle-Blowing and Ethical Behavior

Posted by The Situationist Staff on May 30, 2013

Earlier this week, NPR broadcast an excellent (situationist) story titled “Why Do Whistle-Blowers Become Whistle-Blowers?” by David Greene and Shankar Vedantam. In it, they discussed recent research by David Mayer and his co-authors (Mayer, D. M., Nurmohamed, S., Treviño, L. K., Shapiro, D. L., & Schminke, M. 2013. Encouraging employees to report unethical conduct internally: It takes a village. Organizational Behavior and Human Decision Processes, 121: 89-103).

Listen to their story by clicking here.

Posted in Morality, Social Psychology | Leave a Comment »

The Cheater’s Situation

Posted by The Situationist Staff on May 25, 2013

From a very good 2011 NYTimes article by Benedict Carey, here are a few excerpts on some of the psychological dynamics behind cheating:

[P]aradoxically, it’s often an obsession with fairness that leads people to begin cutting corners in the first place.

“Cheating is especially easy to justify when you frame situations to cast yourself as a victim of some kind of unfairness,” said Dr. Anjan Chatterjee, a neurologist at the University of Pennsylvania who has studied the use of prescription drugs to improve intellectual performance. “Then it becomes a matter of evening the score; you’re not cheating, you’re restoring fairness.”

The boilerplate tale of a good soul gone wrong is well known. It begins with small infractions — illegally downloading a few songs, skimming small amounts from the register, lies of omission on taxes — and grows by increments. The experiment becomes a hobby that becomes a way of life. In a recent interview with New York magazine, Bernard Madoff said his Ponzi scheme grew slowly from an investment advisory business that he began as a sideline for certain clients.

This slippery-slope story obscures the process of moving to the dark side; namely, that people subconsciously seek shortcuts more than they realize — and make a deliberate decision when they begin to cheat in earnest.

In a series of recent studies, Dan Ariely of Duke University and his colleagues gave college students opportunities to cheat on a general knowledge test. In one, students were instructed to transfer their answers onto a form with color-in bubbles, to register their official score. Some received bubble sheets with the correct answers seemingly inadvertently shaded in gray, and changed about 20 percent of their answers. A follow-up study demonstrated that they were unaware of the magnitude of their dishonesty. They were cheating without being fully aware of it.

Yet the behavior changes once a clear rule is in place. “If you specifically tell people in these studies not to use the answer key and just sign their name,” said Zoe Chance, a doctoral student at Harvard who worked on some of the experiments, “they won’t look at it.”

David DeSteno, a psychologist at Northeastern University in Boston and co-author of the . . . book “Out of Character,” about deception and other misbehavior, said: “With all of these kinds of decisions there’s a battle between short- and long-term gains, a tension between the more virtuous choice and the less virtuous one. And of course there are outside factors that can sway that arrow to one side or another.”

That is, low-level cheating may be natural and even productive in some situations; the brain naturally seeks useful shortcuts. But most people tend to follow rules they accept as fair, even when they have the opportunity and a strong incentive to break them.

In short, the move from small infractions to a deliberate pattern of deception or fraud is less an incremental slide than a deliberate strategy. And in most people it takes shape for personal, and often very emotional, reasons, psychologists say.

One of the most obvious of these is resentment of an authority or a specific rule. The evidence of this is easy enough to see in everyday life, with people flouting laws about cellphone use, smoking, the wearing of helmets. In studies of workplace behavior, psychologists have found that in situations where bosses are abusive, many employees withhold the unpaid extras that help an organization, like being courteous to customers or helping co-workers with problems.

Yet perhaps the most powerful urge to cheat stems from a deep sense of unfairness, psychologists say. As people first begin to compete and compare themselves with others, as early as middle school, they also begin to learn of others’ hidden advantages. Private tutors. Family money. Alumni connections. A regular golf game with the boss. Against a competitor with such advantages, taking credit for other people’s work at the office is not only easier, it can seem only fair.

Once the cheating starts, it’s natural to impute it to others. “When it comes to negative characteristics, we tend to overestimate how much others have in common with us,” said David Dunning, a psychologist at Cornell University.

That is to say: A corner cutter often begins to think everyone else is cheating after he has started cheating, not before.

“And if they are subsequently rewarded for the extra productivity, they tend to internalize the feeling of pride and view their success as due to inherent ability and not something else they were using,” said Dr. DeSteno.

Finally, in the winner-take-all environment that characterizes many competitive fields, cheating feels like a hedge against that most degrading sensation: being a chump. The fear of finishing out of the money and hearing someone say, “Wait, you mean to tell me you could have and you didn’t?” Psychologists argue that the sensation of being duped — anger, self-blame, bitterness — is such a singular cocktail that it forces an uncomfortable kind of self-awareness.

How much of a fool am I? How did I not see this?

It happens every day to people who resist cheating. Nothing fair about it.

Read the entire article here.

Posted in Conflict, Morality, Uncategorized | Leave a Comment »

The Situational Benefits of Compassion

Posted by The Situationist Staff on May 20, 2013

Emma Seppala, for The Observer, has an outstanding overview of some of the health consequences and contagiousness of compassion. Here is a portion of her article:

Decades of clinical research have focused on and shed light on the psychology of human suffering. That suffering, as unpleasant as it is, often also has a bright side to which research has paid less attention: compassion. Human suffering is often accompanied by beautiful acts of compassion by others wishing to help relieve it. What led 26.5 percent of Americans to volunteer in 2012 (according to statistics from the US Department of Labor)? What propels someone to serve food at a homeless shelter, pull over on the highway in the rain to help someone with a broken down vehicle, or feed a stray cat?

What is Compassion?

What is compassion and how is it different from empathy or altruism? The definition of compassion is often confused with that of empathy. Empathy, as defined by researchers, is the visceral or emotional experience of another person’s feelings. It is, in a sense, an automatic mirroring of another’s emotion, like tearing up at a friend’s sadness. Altruism is an action that benefits someone else. It may or may not be accompanied by empathy or compassion, for example in the case of making a donation for tax purposes. Although these terms are related to compassion, they are not identical. Compassion often does, of course, involve an empathic response and an altruistic behavior. However, compassion is defined as the emotional response when perceiving suffering and involves an authentic desire to help.

Is Compassion Natural or Learned?

Though economists have long argued the contrary, a growing body of evidence suggests that, at our core, both animals and human beings have what APS Fellow Dacher Keltner at the University of California, Berkeley, coins a “compassionate instinct.” In other words, compassion is a natural and automatic response that has ensured our survival. Research by APS Fellow Jean Decety, at the University of Chicago, showed that even rats are driven to empathize with another suffering rat and to go out of their way to help it out of its quandary. Studies with chimpanzees and human infants too young to have learned the rules of politeness also back up these claims. Michael Tomasello and other scientists at the Max Planck Institute, in Germany, have found that infants and chimpanzees spontaneously engage in helpful behavior and will even overcome obstacles to do so. They apparently do so from intrinsic motivation without expectation of reward. A recent study they ran indicated that infants’ pupil diameters (a measure of attention) decrease both when they help and when they see someone else helping, suggesting that they are not simply helping because helping feels rewarding. It appears to be the alleviation of suffering that brings reward — whether or not they engage in the helping behavior themselves. Recent research by David Rand at Harvard University shows that adults’ and children’s first impulse is to help others. Research by APS Fellow Dale Miller at Stanford’s Graduate School of Business suggests that this is also the case for adults; however, worrying that others will think they are acting out of self-interest can stop them from acting on this impulse to help.

It is not surprising that compassion is a natural tendency since it is essential for human survival. As has been brought to light by Keltner, the term “survival of the fittest,” often attributed to Charles Darwin, was actually coined by Herbert Spencer and Social Darwinists who wished to justify class and race superiority. A lesser known fact is that Darwin’s work is best described with the phrase “survival of the kindest.” Indeed in The Descent of Man and Selection In Relation to Sex, Darwin argued for “the greater strength of the social or maternal instincts than that of any other instinct or motive.” In another passage, he comments that “communities, which included the greatest number of the most sympathetic members, would flourish best, and rear the greatest number of offspring.” Compassion may indeed be a naturally evolved and adaptive trait. Without it, the survival and flourishing of our species would have been unlikely.

One more sign that compassion is an adaptively evolved trait is that it makes us more attractive to potential mates: a study examining the traits most highly valued in potential romantic partners suggests that both men and women agree that “kindness” is one of the most desirable.

Compassion’s Surprising Benefits for Physical and Psychological Health

Compassion may have ensured our survival because of its tremendous benefits for both physical and mental health and overall well-being. Research by APS William James Fellow Ed Diener, a leading researcher in positive psychology, and APS James McKeen Cattell Fellow Martin Seligman, a pioneer of the psychology of happiness and human flourishing, suggests that connecting with others in a meaningful way helps us enjoy better mental and physical health and speeds up recovery from disease; furthermore, research by Stephanie Brown, at Stony Brook University, and Sara Konrath, at the University of Michigan, has shown that it may even lengthen our life spans.

The reason a compassionate lifestyle leads to greater psychological well-being may be explained by the fact that the act of giving appears to be as pleasurable, if not more so, as the act of receiving. A brain-imaging study headed by neuroscientist Jordan Grafman from the National Institutes of Health showed that the “pleasure centers” in the brain, i.e., the parts of the brain that are active when we experience pleasure (like dessert, money, and sex), are equally active when we observe someone giving money to charity as when we receive money ourselves! Giving to others even increases well-being above and beyond what we experience when we spend money on ourselves. In a revealing experiment by Elizabeth Dunn, at the University of British Columbia, participants received a sum of money; half of the participants were instructed to spend the money on themselves, and the other half were told to spend the money on others. At the end of the study, which was published in the academic journal Science, participants who had spent money on others felt significantly happier than those who had spent money on themselves.

This is true even for infants. A study by Lara Aknin and colleagues at the University of British Columbia shows that even in children as young as two, giving treats to others increases the givers’ happiness more than receiving treats themselves. Even more surprisingly, the fact that giving makes us happier than receiving is true across the world, regardless of whether countries are rich or poor. A new study by Aknin, now at Simon Fraser University, shows that the amount of money spent on others (rather than for personal benefit) and personal well-being were highly correlated, regardless of income, social support, perceived freedom, and perceived national corruption.

Why is Compassion Good For Us?

Why does compassion lead to health benefits in particular? A clue to this question rests in a fascinating new study by Steve Cole at the University of California, Los Angeles, and APS Fellow Barbara Fredrickson at the University of North Carolina at Chapel Hill. The results were reported at Stanford Medical School’s Center for Compassion and Altruism Research and Education’s (CCARE) inaugural Science of Compassion conference in 2012. Their study evaluated the levels of cellular inflammation in people who describe themselves as “very happy.” Inflammation is at the root of cancer and other diseases and is generally high in people who live under a lot of stress. We might expect that inflammation would be lower for people with higher levels of happiness. Cole and Fredrickson found that this was only the case for certain “very happy” people. They found that people who were happy because they lived the “good life” (sometimes also known as “hedonic happiness”) had high inflammation levels but that, on the other hand, people who were happy because they lived a life of purpose or meaning (sometimes also known as “eudaimonic happiness”) had low inflammation levels. A life of meaning and purpose is one focused less on satisfying oneself and more on others. It is a life rich in compassion, altruism, and greater meaning.

Another way in which a compassionate lifestyle may improve longevity is that it may serve as a buffer against stress. A new study conducted on a large population (more than 800 people) and spearheaded by the University at Buffalo’s Michael Poulin found that stress did not predict mortality in those who helped others, but that it did in those who did not. One of the reasons that compassion may protect against stress is the very fact that it is so pleasurable. Motivation, however, seems to play an important role in predicting whether a compassionate lifestyle exerts a beneficial impact on health. Sara Konrath, at the University of Michigan, discovered that people who engaged in volunteerism lived longer than their non-volunteering peers — but only if their reasons for volunteering were altruistic rather than self-serving.

Another reason compassion may boost our well-being is that it can help broaden our perspective beyond ourselves. Research shows that depression and anxiety are linked to a state of self-focus, a preoccupation with “me, myself, and I.” When you do something for someone else, however, that state of self-focus shifts to a state of other-focus. If you recall a time you were feeling blue and suddenly a close friend or relative calls you for urgent help with a problem, you may remember that as your attention shifts to helping them, your mood lifts. Rather than feeling blue, you may have felt energized to help; before you knew it, you may even have felt better and gained some perspective on your own situation as well.

Finally, one additional way in which compassion may boost our well-being is by increasing a sense of connection to others. One telling study showed that lack of social connection is a greater detriment to health than obesity, smoking, and high blood pressure. On the flip side, strong social connection leads to a 50 percent increased chance of longevity. Social connection strengthens our immune system (research by Cole shows that genes impacted by social connection also code for immune function and inflammation), helps us recover from disease faster, and may even lengthen our life. People who feel more connected to others have lower rates of anxiety and depression. Moreover, studies show that they also have higher self-esteem, are more empathic to others, more trusting and cooperative and, as a consequence, others are more open to trusting and cooperating with them. Social connectedness therefore generates a positive feedback loop of social, emotional, and physical well-being. Unfortunately, the opposite is also true for those who lack social connectedness. Low social connection has been generally associated with declines in physical and psychological health, as well as a higher propensity for antisocial behavior that leads to further isolation. Adopting a compassionate lifestyle or cultivating compassion may help boost social connection and improve physical and psychological health.

Read the entire article, including sections on “why compassion really does have the ability to change the world” and “cultivating compassion” here.

Posted in Altruism, Distribution, Emotions, Morality, Positive Psychology | 2 Comments »

Frontier Tort – Selling Beer in Whiteclay

Posted by The Situationist Staff on April 15, 2013

Alcoholism Cover Small

At Harvard Law School in the fall of 2012, the 80 students in Professor Hanson’s situationist-oriented first-year torts class participated in an experimental group project. The project required students to research, discuss, and write a white paper about a current policy problem for which tort law (or some form of civil liability) might provide a partial solution. Their projects, presentations, and white papers were informed significantly by the mind sciences. You can read more about those projects, view the presentations, and download the white papers at the Frontier Torts website.

One of the group projects involved the sale of alcohol to members of the Oglala Sioux in Whiteclay, Nebraska, just outside the Pine Ridge Indian Reservation. Here’s the Executive Summary of the white paper.

Native American Alcoholism: A Frontier Tort

Executive Summary


Since its introduction into Native American communities by European colonists, alcohol has plagued the members of many tribes to a disastrous extent. The Oglala Sioux of Pine Ridge have especially suffered from alcoholism, enabled and encouraged by liquor stores just outside the reservation’s borders. Despite the complexities of this situation, media outlets have often reduced it to a pitiable image of dirty, poor Native Americans, degraded by the white man’s vice.

Upon further analysis, however, it becomes evident that there are a variety of factors influencing the situation of Native American alcoholism. While neurobiological, psychological, and genetic factors are often thought to offer plausible internal situational explanations as to why Native Americans suffer so much more acutely from this disease than the rest of the nation, high levels of poverty in Native American communities, a traumatic and violent history, and informational issues compound as external situational factors that exacerbate the problem.

Unfortunately, the three major stakeholders in this situation (the alcohol industry, the State of Nebraska, and the Native Americans) have conflicting interests, tactics, and attribution modes that clash significantly in ways that have prevented any meaningful resolution from being reached. However, there are a variety of federal, state, and tribal programs and initiatives that could potentially resolve this issue in a practical way, so long as all key players agree to participate in a meaningful, collaborative effort.

The key to implementation of these policy actions is determining who should bear the costs they require: society as a whole through the traditional federal taxes, the alcohol companies through tort litigation, or the individuals who purchase the alcohol through an alcohol sales tax. Ultimately, an economic analysis leads to the conclusion that liability should be placed upon the alcohol companies and tort litigation damages should fund the suggested policy initiatives.

You can watch the related presentations and download the white paper here.

Posted in Deep Capture, Food and Drug Law, History, Marketing, Morality, Neuroscience, Politics, Situationist Contributors | Leave a Comment »

Max Bazerman Speaks at HLS – Thursday!

Posted by The Situationist Staff on February 7, 2013

Thursday, February 7, 12-1 p.m.
Wasserstein 1015
Professor Max Bazerman (HBS)
“Bounded Ethicality”
Sponsor: Student Association for Law & Mind Sciences

Professor Bazerman will present his recent research on ethical behavior. He argues that, in contrast to the search for the few “bad apples,” the majority of unethical events occur as the result of ordinary and predictable psychological processes. As a result, even good people engage in unethical behavior, without their own awareness, on a regular basis.

Free Thai food!

Learn more about Professor Bazerman’s work here.

Posted in Choice Myth, Events, Morality, SALMS, Social Psychology | Leave a Comment »

The Situation of Fraudulent Social Science

Posted by The Situationist Staff on December 2, 2012

Press Release from Tilburg University:

A culture permeated by ‘flawed science’ surrounded social psychologist Diederik Stapel. This is one reason why his academic misconduct went undetected for so long. The investigation into his practices and the discussion that followed have served as a catalyst for positive change, however. The fraud case has raised international awareness of the importance of scientific integrity. The discussion is now focusing more than ever on replication, data archiving and the general research culture.

This is the conclusion of the Levelt, Noort and Drenth Committees as published in their joint final report on the Stapel case. The report was presented to the Rectors of the universities concerned on November 28. The Committees investigated the periods during which Stapel committed scientific fraud and the publications involved. The Committees identified 55 publications in which it is certain that Stapel committed fraud during his time in Groningen and Tilburg. In addition, eleven older publications by Stapel published when he worked in Amsterdam and Groningen show indications of fraud. The earliest dates from 1996. A total of ten doctoral dissertations supervised by Stapel are ‘contaminated’ (seven in Groningen and three from recent years in Tilburg).

Although Stapel is fully and solely responsible for this extensive case of academic fraud, the Committees are also critical of the research culture in which this academic misconduct was allowed to go undetected. The Committees describe this as “a general culture of careless, selective and uncritical handling of research and data.” They conclude that “…from the bottom to the top there was a general neglect of fundamental scientific standards and methodological requirements.” The Committees point the finger not only at Stapel’s peers, but also at editors and reviewers of international journals.

The three Committees received all possible assistance for their investigation. They conclude that the discussion surrounding the case has led to a series of measures to prevent academic fraud and to investigate suspicions of fraud more effectively. “By establishing committees and issuing reports, organizations such as KNAW (Royal Netherlands Academy of Arts and Sciences), VSNU (Association of Universities in the Netherlands), and the European Federation of Academies of Sciences and Humanities (ALLEA) all have contributed to the debate about breaches of scientific integrity and their prevention,” according to the Committees. The recommendations presented by the Schuyt Committee (KNAW) similarly contribute to promoting scientific integrity.

In Stapel’s field, Social Psychology, many initiatives have already been taken to improve research practices. For example, the Association of Social Psychological Researchers ASPO is very active in the field of continuing education, data storage and replication.

An English translation of the final report ‘Flawed Science’ by the Levelt, Noort, and Drenth committees is available online (pdf).

Posted in Morality, Social Psychology | 3 Comments »

Revisiting Milgram and Zimbardo’s Studies

Posted by Adam Benforado on November 23, 2012

A new essay in PLOS Biology returns to the path-breaking research of Stanley Milgram and Situationist Contributor Phil Zimbardo and asks whether the studies demonstrate the power of blind conformity or something else. In particular, the authors, Alex Haslam and Stephen Reicher, are interested in the possibility that social identification might be driving the dynamic. As Haslam explains, “Decent people participate in horrific acts not because they become passive, mindless functionaries who do not know what they are doing, but rather because they come to believe — typically under the influence of those in authority — that what they are doing is right.”

Here is the abstract of the paper:

Understanding of the psychology of tyranny is dominated by classic studies from the 1960s and 1970s: Milgram’s research on obedience to authority and Zimbardo’s Stanford Prison Experiment. Supporting popular notions of the banality of evil, this research has been taken to show that people conform passively and unthinkingly to both the instructions and the roles that authorities provide, however malevolent these may be. Recently, though, this consensus has been challenged by empirical work informed by social identity theorizing. This suggests that individuals’ willingness to follow authorities is conditional on identification with the authority in question and an associated belief that the authority is right.

Posted in Abstracts, Ideology, Morality, Situationist Contributors, Social Psychology | 3 Comments »

The Good, the Bad, and the Baby

Posted by The Situationist Staff on November 19, 2012

From 60 Minutes:

The above video is from “The Baby Lab” which aired on Nov. 18, 2012.

Posted in Altruism, Evolutionary Psychology, Morality, Video | Leave a Comment »

Sunstein on Motivated Judicial Reasoning

Posted by The Situationist Staff on October 23, 2012

From Bloomberg (an op-ed by Harvard Law School’s Cass Sunstein):

In the context of affirmative action, some of the nation’s most important and distinguished conservative legal thinkers, including Justices Antonin Scalia and Clarence Thomas, appear to have abandoned their own deepest beliefs about how to interpret the Constitution.

Unfortunately, this is not the only area in which they have done so. To appreciate the problem, we have to step back a bit.

For at least 25 years, there has been a clear division between leading conservatives and liberals with respect to constitutional interpretation. Conservatives have tended to favor “originalism” — the view that the meaning of the Constitution is fixed by the original understanding of its provisions at the time they were ratified.

Liberals have tended to reject originalism. They contend that the Constitution establishes broad principles whose specific meaning changes over time and that must, in the words of the influential legal theorist Ronald Dworkin, be given a “moral reading.”

Consider debates over the right to choose abortion and to engage in sexual relationships with people of the same gender. Many conservatives insist, rightly and to their credit, that our moral judgments must be separated from our judgments about the meaning of the Constitution. They go on to argue that if no provision of the Constitution was understood to protect these rights when it was ratified, then none protects these rights today.

* * *

Just this month, Justice Scalia put the point unambiguously: “Abortion? Absolutely easy. Nobody ever thought the Constitution prevented restrictions on abortion. Homosexual sodomy? Come on. For 200 years, it was criminal in every state.” By contrast, liberals have urged that the meaning of the Constitution’s broad principles evolves, and that judges can legitimately help shape the evolution.

Last week, the Supreme Court heard oral arguments involving the constitutionality of an affirmative-action policy at the University of Texas. Here is the great paradox: None of the conservative justices asked a single question about whether affirmative-action programs are consistent with the original meaning of any provision of the Constitution.

This failure to consider history is long-standing. Justices Scalia and Thomas, the court’s leading “originalists,” have consistently argued that the Constitution requires colorblindness. But neither of them has devoted so much as a paragraph to the original understanding. As conservative Ramesh Ponnuru, liberal Adam Winkler and others have suggested, their silence is especially puzzling because for decades, well-known historical work has strongly suggested that when passed by Congress in 1866 and ratified by the states in 1868, the 14th Amendment did not compel colorblindness.

Perhaps the most important evidence is the Freedmen’s Bureau Act of 1866, which specifically authorized the use of federal funds to provide educational and other benefits to African-Americans. Opponents of the act (including President Andrew Johnson) explicitly objected to the violation of colorblindness, in the form of special treatment along racial lines. In fact, much of the congressional debate involved colorblindness. Along with many others, Representative Ignatius Donnelly of Minnesota gave what the strong majority of Congress saw as a decisive response: “We have liberated four million slaves in the South. It is proposed by some that we stop right here and do nothing more. Such a course would be a cruel mockery.”

As law professor Eric Schnapper has shown, the 1866 Freedmen’s Bureau Act was one of several race-conscious measures enacted in the same period during which the nation ratified the 14th Amendment — which is now being invoked to challenge affirmative action. If Congress enacted race-conscious measures in the same year that it passed that amendment, and just two years before the nation ratified it, we should ask: Isn’t it clear that the 14th Amendment doesn’t require colorblindness?

* * *

Maybe this question can be answered. Maybe current affirmative-action programs, including the one at the University of Texas, are meaningfully different from the measures enacted by Congress after the Civil War. But to invalidate current programs, constitutional originalists have to say more. They must show that such programs are fatally inconsistent with the original understanding. Maybe they can do this, but remarkably, they haven’t even tried.

How can we explain this conspicuous lack of historical curiosity? . . . .

To read the entire article, including Sunstein’s answer to that question, click here.

Posted in Book, Morality, Social Psychology | Leave a Comment »

The Situation of Not Helping

Posted by The Situationist Staff on October 21, 2012

From Youtube:

A man tries to help a woman being attacked, but instead he is stabbed and left to die in the streets of New York. As Paul Johnson reports, over 20 people pass the dying man and do nothing to help. A look at the various cases in the U.S. and Canada where bystanders could have saved people but chose to look the other way.

In the video below, Situationist Contributor Philip Zimbardo describes the bystander effect and introduces an excellent series of demonstrations of the effect.

Posted in Morality, Social Psychology, Video | Leave a Comment »

The Situation of Libertarianism

Posted by The Situationist Staff on September 6, 2012

Situationist Contributor Peter Ditto and co-authors (Iyer, R., Koleva, S., Graham, J., & Haidt, J.) have recently published their article, “Understanding libertarian morality: The psychological dispositions of self-identified libertarians” on PLoS ONE.  Here’s the abstract:

Libertarians are an increasingly prominent ideological group in U.S. politics, yet they have been largely unstudied. Across 16 measures in a large web-based sample that included 11,994 self-identified libertarians, we sought to understand the moral and psychological characteristics of self-described libertarians. Based on an intuitionist view of moral judgment, we focused on the underlying affective and cognitive dispositions that accompany this unique worldview. Compared to self-identified liberals and conservatives, libertarians showed 1) stronger endorsement of individual liberty as their foremost guiding principle, and weaker endorsement of all other moral principles; 2) a relatively cerebral as opposed to emotional cognitive style; and 3) lower interdependence and social relatedness. As predicted by intuitionist theories concerning the origins of moral reasoning, libertarian values showed convergent relationships with libertarian emotional dispositions and social preferences. Our findings add to a growing recognition of the role of personality differences in the organization of political attitudes.

Download the pdf of the article here.

Posted in Abstracts, Ideology, Morality, Situationist Contributors | Leave a Comment »

 