In Jan ’09 I wrote: This is now my best account of disagreement. We disagree because we explain our own conclusions via detailed context (e.g., arguments, analysis, and evidence), and others’ conclusions via coarse stable traits (e.g., demographics, interests, biases). While we know abstractly that we also have stable relevant traits, and they have detailed context, we simply assume we have taken that into account, when we have in fact done no such thing.
Khoth - Re: "it’s easy to handwave away objections by saying things like “the free market will deal with it” or “the government will choose the best option” or whatever. It’s only when you’re thinking about how things will actually work on a detailed level that you actually have to stop and think about why your handwave explanation is true"
There is some truth to that, but the problem with applying it to your free market example is that one of the major reasons to support free markets is that markets are too complex to work out how things work on a detailed level. If you really could work it out, perhaps some central planning committee with powerful computers could work out some optimum (assuming that there was an agreed criterion for optimum, and assuming people were willing to sacrifice freedom to get it), but I don't think that will ever be the case (even with those rather unsafe assumptions).
komponisto and khoth, yes, you both make sense.
Tim, should we refuse to be persuaded by you here out of a fear that you are trying to manipulate us? Is a fear of manipulation really more relevant on big important topics?
I suspect part of it is that when you're not considering details, it's easy to handwave away objections by saying things like "the free market will deal with it" or "the government will choose the best option" or whatever. It's only when you're thinking about how things will actually work on a detailed level that you actually have to stop and think about why your handwave explanation is true, and then you might actually come to realise that in this case it's not.
The evidence for a "huge mistake" doesn't seem very compelling. If people are signalling to you in a manner that makes you inclined to change your beliefs, there is a significant chance that they are trying to manipulate you in a manner which serves their interests more than it serves your own. In which case, you should often resist updating.
Do people update less than they should? The case for that isn't clear to me. The costs of being manipulated can be high - so precaution dictates a certain lack of enthusiasm for updating. Also, there are costs to updating - changing beliefs can result in belief landslides and temporary inconsistency and stress. "Belief inertia" often has the benefit of staying in a relatively proven and safe area.
Yes there may be people and times when others’ opinions really do contain relatively little info, but most folks are far too quick to assume that this applies to them now.
On the other hand, your theory also helps explain instances of the opposite error, and we shouldn't overlook this. For example, people who would have no qualms about criticizing the decisions of a president, Congress, or the Supreme Court are often unduly reluctant to question jury verdicts in criminal cases -- despite the fact that the latter involve ordinary people without any particular training or expertise in making such judgments.
This paradox is somewhat explained by supposing that criminal cases are viewed as details unimportant to one's worldview, and thus considered in near mode -- with the result (according to this post) that folks are more deferential to the opinions of others. By contrast, larger-scale government decisions involve Big Issues, and are considered in far mode, so that people tend to see their opinions as part of their identity and are consequently more attached to their explicit reasons and less affected by disagreement.
So, even on this way of viewing things, there are two sides to the coin, and one can err in either direction.