Emotions, Values, and Information: The Future of Nanotechnology
Posted by Dan Kahan on April 4, 2007
If you either answered “yes” to the first question, or assumed that answering “no” disqualified you from offering an opinion on the second, you are unusual. We (Paul Slovic, Don Braman, John Gastil, Geoffrey Cohen, and I) recently conducted a national study of nanotechnology risk perceptions. We discovered that although 80% of our subjects reported having heard “little” or “nothing” about nanotechnology, 90% of them (without being supplied any additional information) still had a view one way or the other on whether it was on net risky or beneficial.
But in truth, that result didn’t surprise us. It’s well known that people form rapid, intuitive judgments about even unfamiliar risks. Our primary goal in the study was to find out on what basis they would form such judgments toward nanotechnology and, even more significant, how their views would be influenced by the provision of information about this novel science.
What we discovered convinced us that nanotechnology is an emotionally charged topic that is poised to generate exactly the sort of political conflict that has historically attended nuclear power and today characterizes global warming.
The study involved a demographically diverse sample of 1,850 persons. In addition to eliciting our subjects’ views of nanotechnology risks and benefits, we collected data on various individual characteristics that we hypothesized might explain those views.
The study (the results of which are set forth more completely in a working paper) generated two principal findings. The first is that existing reactions to nanotechnology are affect-driven. The sign (positive or negative) and intensity of subjects’ visceral or emotional reactions toward nanotechnology explained eight times as much of the variance in their perceptions of its risks as did either gender or race. The impact of affect was approximately seven times larger than the impact of confidence in government to regulate risks effectively, six times larger than the impact of education, and four times larger than the impact of perception of other environmental risks. The next biggest influence — how much subjects reported knowing about nanotechnology before the study — was less than half that of affect.
This finding, of course, raises the question, What explains variance in affect? A variety of things, we found, but among the strongest predictors of our subjects’ affective response to nanotechnology were their perceptions of other environmental risks, such as nuclear power and global warming. In sum, the subjects in our study seemed to have a gut reaction to nanotechnology, a relatively novel risk, that was informed by their attitudes toward more familiar environmental dangers.
The second major finding had to do with what happens when individuals learn more about nanotechnology. To address this issue, we divided our sample into two and furnished one group with additional information about nanotechnology before eliciting their views. That information consisted of two relatively short paragraphs, one setting forth potential benefits of nanotechnology and the other potential risks. We then compared the views of subjects who received this information to those who didn’t receive any.
Overall, there was no difference between the views of our “no information” subjects and our “information exposed” subjects on the relative risks and benefits of nanotechnology. This, too, was perfectly predictable, given the balanced nature of the information we supplied.
But when we examined the views of subgroups of respondents defined with reference to their values, we discovered something much more interesting: polarization of our subjects along cultural and ideological lines.
The theory of “cultural cognition” posits that individuals process information in a way that reflects and reinforces their general preferences about how society should be organized. Egalitarians and communitarians, for example, tend to be sensitive to claims of environmental and technological risks because abating such dangers justifies regulating commercial activities that generate inequality and legitimize unconstrained pursuit of self-interest. Individualists, in contrast, tend to be skeptical about such risks, in line with their concern to ward off contraction of the sphere of individual initiative. So do hierarchists, who tend to see assertions of environmental and technological risks as challenging the competence of governmental and social elites. We evaluated our subjects’ worldviews using scales that correspond to these cultural outlooks.
In our “no information” condition, hierarchists and egalitarians, individualists and communitarians all had roughly comparable perceptions of nanotechnology risks. However, in the “information exposure” condition, subjects adopted toward nanotechnology the clashing positions persons with their respective worldviews take on environmental risks generally.
Exposure to information also seemed to excite recognizable ideological divisions. Liberals, who held a slightly more positive view of nanotechnology in the “no information” condition, actually traded places with conservatives in the “information exposure” condition, assuming the stance of risk concern more characteristic of their ideology.
In sum, values operated as a powerful heuristic for our subjects. Confronted with balanced competing arguments about a novel risk, they assigned more weight to the position that best fit their general cultural and political predispositions.
Does this mean that public deliberations on nanotechnology will be plagued by division and acrimony? That’s certainly a possibility. In particular, it can’t be assumed that the discovery of scientifically accurate information about the risks and benefits of nanotechnology will of its own force generate societal consensus on whether and how its development should be regulated: as they do on many well-known risks — from climate change to nuclear power to handgun possession to terrorism — people with different values are predisposed to draw different factual conclusions from the same information. If anything, the polarization effects we observed in our study could be even larger in the real world, where individuals are likely to select information sources that fit their values and that supply them with information systematically skewed toward one position or the other.
But I, at least, don’t think such polarization on nanotechnology is inevitable. At the same time that the study of cultural cognition is generating insights into how values shape individuals’ processing of information, it is also teaching us lessons (ones I will describe in future posts) about how information can be framed so that persons of diverse cultural views can get the same factual content from it. That obviously doesn’t mean those persons all reach the same conclusions on how to balance the risks and benefits of nanotechnology or other forms of science. But it does mean that their deliberations will be informed by the best understandings available of what those risks and benefits are — a condition they would presumably all agree is essential to enlightened democratic regulation of risk.
The bottom line is that those who favor informed public deliberation about nanotechnology should be neither sanguine nor bleak. Instead, they should be psychologically realistic. And if they are, they will see the urgent need for additional efforts to develop risk-communication strategies that make it possible for culturally diverse citizens to converge on policies that promote their common interests.
This entry was posted on April 4, 2007 at 9:55 pm and is filed under Cultural Cognition, Emotions, Politics, Public Policy, Social Psychology.