By now, many folks are familiar with the implicit social cognition work of Anthony Greenwald and Situationist contributors Mahzarin Banaji and Brian Nosek. The concept of “implicit bias,” which can be measured by reaction-time instruments such as the Implicit Association Test (IAT), has already had a substantial impact on the way that we think about race, racism, and race relations. The data reveal that we generally have more biases than we think we do.
Predictably, a backlash of sorts is forming against this work — after all, it’s always disturbing to think that we are not so colorblind or gender-blind after all. But certain complaints are slightly surprising, especially given where they are coming from.
Ford and Thought Police
To take one recent example, Stanford law professor Richard Ford wrote in a Slate article titled “The Irrelevance of Soft Bigotry” that:
Joe Biden got his presidential campaign off on the wrong foot this week when he called one of his competitors for the Democratic nomination, Barack Obama, “the first mainstream African-American who is articulate and bright and clean and a nice-looking guy.” . . .
So, is Biden a bigot, and should we care in the context of his presidential candidacy (assuming it survives)? Federal laws offer a rationale for concluding that the answer is, not much.
Civil rights law doesn’t prohibit racism by employers. It prohibits discrimination on the basis of race—tangible actions taken by employers, not their bad attitudes. . . .
Yet there are plenty of new tools for the thought police. For instance, the Implicit Association Test, developed by psychologists Mahzarin Banaji and Anthony Greenwald, seeks to identify not just hidden biases, but even unconscious biases.
There’s a lot to respond to here. But for now, I want to focus on only one point: the suggestion that implicit bias researchers are interested in thought control. That is nonsense, a strawman. It is Psychology 101 to separate mental states from action. For lawyers not familiar with the standard typology, psychologists tend to distinguish attitudes from beliefs, sometimes calling negative attitudes “prejudice” and negative beliefs “stereotypes.” More important, they distinguish both attitudes and beliefs from action based on those mental constructs, e.g., “discrimination.” So, when Ford writes that we should be focusing on behavior as most important, no one disagrees. It is misleading to suggest otherwise. This is precisely why so many scientific resources are being poured into predictive validity and malleability studies.
Patterson and Authenticity
Here’s another example from Orlando Patterson, sociology professor at Harvard. In an article titled “Our Overrated Inner Self” in the New York Times, he suggests that we are too absorbed with an “authentic” self, and that implicit bias scientists are pursuing a “gotcha psychology” that treats implicit measures as the only “authentic” measures that should matter.
Again, a strawman. No responsible implicit bias scholar contends that implicit bias measures somehow capture the only real, true attitudes and beliefs. (This interest in the “true” or “authentic” also sounds a bit dispositionist, which is another issue.) For example, predictive validity studies suggest that explicit self-reports generally better predict behavior, but in domains influenced by negative attitudes and beliefs, implicit bias measures outperform them as predictors.
At bottom, this strawman may be an odd sort of projection. This criticism might come from those who believe that explicit self-reports alone measure “true” attitudes and beliefs, that these reports are the only “true” predictors of behavior, and that these reports are the only “true” bases for moral, social, and legal evaluation. I doubt any serious psychologist or legal scholar thinks this, and the best science suggests that our lives, minds, and behaviors are much too complicated for such a simple diagnosis. Privileging explicit self-reports, even if entirely sincere, as the only thing that matters is not warranted by the best evidence we have. We need to be more behaviorally realistic (I hope to post soon about “behavioral realism”).
A Call for Care and Self-Critical Engagement
Talking about race and justice is difficult enough without strawmen being built and then torn down. I have always been leery of blogs because they can encourage soundbites and oversimplifications. I’m trying to do just the opposite here. On matters of implicit bias, there’s lots to learn and think through on both empirical and normative fronts. Further, most of the interesting questions are genuinely difficult even without the strawmen. My request of all scientists, legal academics, lawyers, and policy makers working in this domain is to earnestly avoid strawmen.
And if I’ve been guilty of the same in my own work (see, e.g., Trojan Horses of Race, Harvard 2005; Fair Measures, California 2006 (with M. Banaji)), as I’m sure I have, given what Yale psychologist David Armor has shown us about the illusion of objectivity, then point it out. I’ll own up to it and commit to avoiding it in the future.