The usual party chat rule says to not spend too long on any one topic, but instead to flit among topics unpredictably. Many thinkers also seem to follow a rule where if they think about a topic and then write up an opinion, they are done and don’t need to ever revisit the topic again. In contrast, I have great patience for returning again and again to the most important topics, even if they seem crazy hard. And for spending a lot of time on each topic, even if I’m at a party.
A long while ago I spent years studying the rationality of disagreement, though I haven’t thought much about it lately. But rereading Yudkowsky’s Inadequate Equilibria recently inspires me to return to the topic. And I think I have a new take to report: unusually for me, I adopt a mixed, intermediate position.
This topic forces one to try to choose between two opposing but persuasive sets of arguments. On the one side there is formal theory, to which I’ve contributed, which says that rational agents with different information and calculation strategies can’t have a common belief in, nor an ability to foresee, the sign of the difference in their opinions on any “random variable”. (That is, a parameter that can be different in each different state of the world.) For example, they can’t say “I expect your next estimate of the chance of rain here tomorrow to be higher than the estimate I just now told you.”
Yes, this requires that they’d have the same ignorant expectations given a common belief that they both knew nothing. (That is, the same “priors”.) And they must be listening to and taking seriously what the other says. But these seem reasonable assumptions.
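For readers who want the formal version, here is a minimal sketch of the kind of result involved; the notation is mine, and it compresses several related theorems rather than quoting any one of them.

```latex
% A minimal sketch, in my own notation, of the kind of result described above.
% Two agents share a common prior $p$ over states $\Omega$, and each agent $i$
% has an information partition $\Pi_i$. For a random variable $X$, agent $i$'s
% estimate at the true state $\omega$ is the conditional expectation
\[
  q_i(\omega) \;=\; \mathbb{E}_p\!\left[\, X \mid \Pi_i(\omega) \,\right].
\]
% The Aumann-style conclusion: if the values $q_1 = a$ and $q_2 = b$ are common
% knowledge at $\omega$, then $a = b$. The stronger variant gestured at above
% says that it cannot even be common knowledge that, say,
\[
  q_1(\omega) \;>\; q_2(\omega),
\]
% i.e., the sign of the difference in estimates cannot be commonly known or
% foreseen, even when the exact values are not.
```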
An informal version of the argument asks you to imagine that you and someone similarly smart, thoughtful, and qualified each become aware that your independent thoughts and analyses on some question have come to substantially different conclusions. Yes, you might know things that they do not, but they may also know things that you do not. So as you discuss the topic and respond to each other’s arguments, you should expect to, on average, come to more similar opinions near some more intermediate conclusion. Neither of you has a good reason to prefer your initial analysis over the other’s.
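To make this informal story concrete, here is a toy sketch of my own (not from the formal literature, with all numbers invented) in which two honest Bayesians with a common prior trade estimates once; because each announced estimate reveals the private evidence behind it, a single exchange lets them pool their data and agree exactly.

```python
# Toy sketch: two Bayesian agents with a common uniform prior over a coin's
# bias each see private flips, announce posterior means, and then agree.
import random

random.seed(0)

TRUE_P = 0.7     # the coin's actual chance of heads (unknown to the agents)
N_FLIPS = 20     # number of private flips each agent observes

def posterior_mean(heads, flips):
    """Posterior mean of the heads probability under a Beta(1,1) prior."""
    return (heads + 1) / (flips + 2)

def heads_from_mean(mean, flips):
    """Invert an announced posterior mean back to the head count it implies."""
    return round(mean * (flips + 2) - 1)

# Each agent privately observes its own flips.
heads_a = sum(random.random() < TRUE_P for _ in range(N_FLIPS))
heads_b = sum(random.random() < TRUE_P for _ in range(N_FLIPS))

# Their initial, privately informed estimates typically differ.
est_a = posterior_mean(heads_a, N_FLIPS)
est_b = posterior_mean(heads_b, N_FLIPS)
print("before talking:", round(est_a, 3), round(est_b, 3))

# Hearing the other's estimate reveals the other's head count (since the
# number of flips is known), so each can pool all the evidence and update.
pooled_a = posterior_mean(heads_a + heads_from_mean(est_b, N_FLIPS), 2 * N_FLIPS)
pooled_b = posterior_mean(heads_b + heads_from_mean(est_a, N_FLIPS), 2 * N_FLIPS)
print("after talking: ", round(pooled_a, 3), round(pooled_b, 3))  # identical
```

Real disagreements are of course messier, since stated estimates rarely reveal the underlying evidence so cleanly; the point is only that honestly updating on each other’s stated views pushes estimates together.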
Yes, maybe you will discover that you just have a lot more relevant info and analysis. But if they see that, they should then defer more to you, as you would if you learned that they are more expert than you. And if you realized that you were more at risk of being proud and stubborn, that should tell you to reconsider your position and become more open to their arguments.
According to this theory, if you actually end up with common knowledge of, or an ability to foresee, differences of opinion, then at least one of you must be failing to satisfy the theory’s assumptions. At least one of you is not listening enough to, and taking seriously enough, the opinions of the other. Someone is being stubbornly irrational.
Okay, perhaps you are both afflicted by pride, stubbornness, partisanship, and biases of various sorts. What then?
You may find it much easier to identify more biases in them than you can find in yourself. You might even be able to verify that you suffer less from each of the biases that you suspect in them. And that you are also better able to pass specific intelligence, rationality, and knowledge tests of which you are fond. Even so, isn’t that roughly what you should expect even if the two of you were similarly biased, but just in different ways? On what basis can you reasonably conclude that you are less biased, even if stubborn, and so should stick more to your guns?
A key test is: do you in fact reliably defer to most others who can pass more of your tests, and who seem even smarter and more knowledgeable than you? If not, maybe you should admit that you typically suffer from accuracy-compromising stubbornness and pride, and so for accuracy purposes should listen a lot more to others. Even if you are listening about the right amount for other purposes.
Note that in a world where many others have widely differing opinions, it is simply not possible to agree with them all. The best that could be expected from a rational agent is to not consistently disagree with some average across them all, some average with appropriate weights for knowledge, intelligence, stubbornness, rationality, etc. But even our best people seem to consistently violate this standard.
All that we’ve discussed so far has been regarding just one of the two opposing but persuasive sets of arguments I mentioned. The other argument set centers around some examples where disagreement seems pretty reasonable. For example, fifteen years ago I said to “disagree with suicide rock”: a rock painted with words pretending that it is a sentient creature listening carefully to your words, but offering no evidence that it actually listens, should be treated like the simple painted rock that it is. In that case, you have strong evidence to down-weight its claims.
A second example involves sleep. While we are sleeping we don’t usually have an opinion on whether we are sleeping, as that issue doesn’t occur to us. But if the subject does come up, we often mistakenly assume that we are awake. Yet a person who is actually awake can have high confidence in that fact; they can know that while a dreaming mind is seriously broken, their mind is not so broken.
An application to disagreement comes when my wife awakes in the night, hears me snoring, and tells me that I’m snoring and should turn my head. Responding half asleep, I often deny that I’m snoring, as I then don’t remember hearing myself snore recently, and I assume that I’d hear such a thing. In this case, if my wife is in fact awake, she can comfortably disagree with me. She can be pretty sure that she did hear me snore and that I’m just less reliable due to being only half awake.
Yudkowsky uses a third example, which I also find persuasive, but at which many of you will balk. That is the case of the majority of people who say they have direct personal evidence for God or other supernatural powers: evidence that’s mainly in their feelings and minds, or in subtle patterns in how their personal life outcomes are correlated with their prayers and sins. Even though most people claim to believe in God, and point to this sort of evidence, Yudkowsky and I think that we can pretty confidently say that this evidence just isn’t strong enough to support that conclusion. Just as we can similarly say that personal anecdotes are usually insufficient to support the usual confidence in the health value of modern medicine.
Sure, it’s hard to say with much confidence that there isn’t a huge smart power somewhere out there in the universe. And yes, if this power did more obvious stuff here on Earth back in the day, that might have left a trail of testimony and other evidence, to which advocates might point. But there’s just no way that either of those considerations can remotely support the usual level of widespread confidence in a God meddling in detail with their heads and lives.
The most straightforward explanation I can see here is social desirability bias: a bias that not only introduces predictable errors, but also undermines one’s willingness to notice and correct such errors. By attributing their belief to “faith”, many of them do seem to acknowledge quite directly that their argument won’t stand up to the usual evaluation standards. They are instead believing because they want to believe, because their social world rewards them for the “courage” and “affirmation” of such a belief.
And that pretty closely fits a social desirability bias. Their minds have turned off their rationality on this topic, and are not willing to consider the evidence I’d present, or the fact that the smartest, most accomplished intellectuals today tend to be atheists. Much like the sleeper who just can’t or won’t see that their mind is broken and unable to notice that they are asleep.
In fact, it seems to me that this scenario matches a great many of the disagreements I’m willing to have with others, as I tend to be willing to consider hypotheses that others find distasteful or low status. Many people tell me that the pictures I paint in my two books are ugly, disrespectful, and demotivating, but far fewer offer any opposing concrete evidence. Even though most people seem able to notice that social desirability would tend to make them less willing to consider such hypotheses, they just don’t want to go there.
Yes, there is an opposite problem: many people are especially attracted to socially undesirable hypotheses. A minority of folks see themselves as courageous “freethinkers” who by rights should be celebrated for their willingness to “think outside the box” and embrace a large fraction of the contrarian hypotheses that come their way. Alas, by being insufficiently picky about the contrarian stories they embrace, they encourage, rather than discourage, everyone else’s social desirability biases. On average, social desirability causes only modest biases in the social consensus, and thus justifies only modest disagreements from those who are especially rational. Going all in on a great many contrarian takes at once is a sign of this opposite problem.
Yes, the stance I’m taking implies that contrarian views, i.e., views that seem socially undesirable to embrace, are on average neglected, and thus more likely to be true than the consensus is willing to acknowledge. But that is of course far from endorsing most of them with high confidence. For example, UFOs as aliens are indeed more likely than the usual prestigious consensus will admit, but could still be pretty unlikely. And assigning a somewhat higher chance to claims like the moon landings having been faked is not at all the same as endorsing such claims.
So here’s my new take on the rationality of disagreement. When you have a similar level of expertise to others, you can justify disagreeing with an apparent social consensus only if you can identify a particularly strong way that the minds of most of those who think about the topic tend to get broken by it, such as by being asleep or suffering from a strong social desirability bias. (A few weak clues won’t do.)
I see this position as mildly supported by polls showing that people think that those in certain emotional states are less likely to be accurate in the context of a disagreement; different emotions plausibly trigger different degrees of willingness to be fair or rational. (Here are some other poll results on what people think predicts who is right in a disagreement.)
But beware of going too wild embracing most socially undesirable views. And you can’t just in general presume that others disagree with each of your many positions due to their minds being broken in some way that you can’t yet see. That way lies unjustified arrogance. You instead want specific concrete evidence of strongly broken minds.
Imagine that you specialize in a topic so much that you know nearly as much as the person in the world who knows the most, but do not have the sort of credentials or ways to prove your views that the world would easily accept. And this is not the sort of topic where insight can be quickly and easily translated into big wins, wins in either money or status. So if others had come to your conclusions before, they would not have gained much personally, nor found easy ways to persuade many others.
In this sort of case, I think you should feel more free to disagree. Though you should respect base rates, and try to test your views as fast and strongly as possible. As the world is just not listening to you, you can’t expect them yet to credit what you know. Just also don’t expect the world to reward you or pay you much attention, even if you are right.
Right - except that the differing goals could be genetic and need not have anything to do with social behavior. The disagreement theory Robin refers to rests on the assumption of shared goals. However, people don't necessarily have shared goals; in practice, their goals conflict. Alice might favor donating funds to her daughter, Ann, while Betty might favor the same funds being given to her son, Bob. Even two "truth seekers" can have conflicting goals: one truth seeker might value the truth about physics, and another might value the truth about chemistry more highly. Goal differences of any kind lead down the same path toward disagreement.
"because their social world rewards them" above would seem to fit what you're talking about.