Ronald Bailey has a column in Reason describing the results of the paper Affect, Values, and Nanotechnology Risk Perceptions by Dan M. Kahan, Paul Slovic, Donald Braman, John Gastil, and Geoffrey L. Cohen. The conclusion is that views on the risks of nanotechnology are readily elicited even when people know that they do not know much about the subject, and that exposure to more facts strengthens these views along ideological lines. Facts do not matter as much as values: people appear to make a quick gut-feeling decision (probably by reacting to the word "technology"), which is then shaped by their ideological outlook. Individualists tend to see the risks as smaller than communitarians do. There are similar studies showing the same thing about biotechnology, and in my experience the same thing happens when the public is exposed to discussions about human enhancement.
The authors claim that this result fits neither "rational weigher" models, where people try to maximize their utility, nor "irrational weigher" models, where cognitive biases and bounded rationality dominate. Rational individualists and communitarians ought not to differ in their risk evaluations, and the authors argue it is unlikely that different cultural backgrounds would produce different biases. They suggest a "cultural weigher" model where individuals do not simply weigh risks, but rather evaluate what one position or another on those risks will signify about how society should be organized. When people learn about nanotechnology or something similar, they do not update instrumental risk probabilities but develop a position toward the technology that best expresses their cultural identities.
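To make the contrast concrete, here is a toy Python sketch (my own illustration, not a model from the paper; the priors, signal reliabilities, and discounting rule are all made-up assumptions). Two "rational weighers" treat every signal as equally reliable, so balanced evidence leaves their mild disagreement where it started; two "cultural weighers" rate the credibility of each signal by whether its content is congenial to their identity, and the same evidence drives them to opposite extremes:

```python
# Toy contrast between the two models (my own sketch, not the authors' model;
# priors, signal reliabilities, and the discounting rule are assumptions).
# True = a signal saying "nanotechnology is risky", False = a "safe" signal.

def bayes_update(p, evidence, likelihood=0.7):
    """Standard Bayesian update of P(risky) given a binary signal that is
    correct with probability `likelihood`."""
    if evidence:
        num = p * likelihood
        den = num + (1 - p) * (1 - likelihood)
    else:
        num = p * (1 - likelihood)
        den = num + (1 - p) * likelihood
    return num / den

def cultural_update(p, evidence, congenial_risky, trust=0.9, distrust=0.55):
    """Same arithmetic, but the signal's perceived reliability depends on
    whether its content is congenial to the agent's cultural identity."""
    congenial = (evidence == congenial_risky)
    return bayes_update(p, evidence, trust if congenial else distrust)

signals = [True, False] * 10    # perfectly balanced mixed evidence

rational_a, rational_b = 0.6, 0.4   # mild initial disagreement
cultural_a, cultural_b = 0.6, 0.4   # communitarian- vs individualist-leaning
for s in signals:
    rational_a = bayes_update(rational_a, s)
    rational_b = bayes_update(rational_b, s)
    cultural_a = cultural_update(cultural_a, s, congenial_risky=True)
    cultural_b = cultural_update(cultural_b, s, congenial_risky=False)

print(f"rational weighers: {rational_a:.2f} vs {rational_b:.2f}")  # 0.60 vs 0.40
print(f"cultural weighers: {cultural_a:.2f} vs {cultural_b:.2f}")  # ~1.00 vs ~0.00
```

The point is only qualitative: once the credibility of evidence is itself filtered through identity, "more facts" means more polarization rather than convergence, which matches the paper's finding.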
This does not bode well for public deliberations on new technologies (or political decisions about them), since it suggests that the only thing deliberation will achieve is a fuller understanding of how to express already-decided cultural/ideological identities with regard to the technology. It does suggest that storytelling around technologies, in particular stories about how they will fit various social projects, will have much more impact than commonly believed. That is not very good for rational discussion or decision-making, unless we can find ways of removing the cultural/ideological assumptions of participants, which is probably pretty hard work in deliberations and impossible in public decision-making.
The discussion here is too far away from the topic of the post. I'm sure there are plenty of other places to have these other debates.
Kevembuangga: "Because intellectual honesty is a prerequisite to avoid dead ends in iterated prisoner's dilemma games. Iterated prisoner's dilemma games are the rational way to establish trust between competing parties. A "dishonest" move cripples the game (through decreased trust) for quite a while."
Establishing trust between competing parties is only valuable to the degree that it optimizes our mutual persistence odds, right?
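To put some numbers on Kevembuangga's trust-decay claim, here is a minimal Python sketch (my own illustration, not from the comment; the 5/3/1/0 payoffs are the standard prisoner's dilemma values, everything else is assumed). Two tit-for-tat players face each other, and one "dishonest" defection echoes back and forth until a player absorbs a retaliation to end it:

```python
# Minimal sketch of the trust-decay claim: two tit-for-tat players in an
# iterated prisoner's dilemma with the standard payoffs T=5, R=3, P=1, S=0.
# A single defection triggers a retaliation echo that depresses both scores.

PAYOFF = {  # (my_move, their_move) -> my payoff; True = cooperate
    (True, True): 3,    # mutual cooperation (R)
    (True, False): 0,   # sucker's payoff (S)
    (False, True): 5,   # temptation payoff (T)
    (False, False): 1,  # mutual defection (P)
}

def play(rounds, cheat_round=None, forgive=False):
    """Two tit-for-tat players; player A defects once at cheat_round.
    With forgive=True, A cooperates through B's retaliation to restore trust."""
    a_prev = b_prev = True              # tit-for-tat opens by cooperating
    a_total = b_total = 0
    for r in range(rounds):
        a_move, b_move = b_prev, a_prev  # each copies the other's last move
        if r == cheat_round:
            a_move = False               # the single dishonest move
        if forgive and cheat_round is not None and r == cheat_round + 2:
            a_move = True                # swallow one loss to break the echo
        a_total += PAYOFF[(a_move, b_move)]
        b_total += PAYOFF[(b_move, a_move)]
        a_prev, b_prev = a_move, b_move
    return a_total, b_total

print("honest game:     ", play(20))                               # (60, 60)
print("one defection:   ", play(20, cheat_round=5))                # (55, 50)
print("with forgiveness:", play(20, cheat_round=5, forgive=True))  # (59, 59)
```

Over 20 rounds the honest pair scores 60 each; a single defection drops the pair to 55 and 50 as the retaliation echo runs on; even with forgiveness both end at 59. The dishonest move is a net loss for the defector too once the game is iterated, which is the sense in which it "cripples the game for quite a while".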