Opacity Blocks Agreement
Years ago, I spent a big chunk of my intellectual career studying the rationality of disagreement, mostly via math modeling, but also some lab experiments. My main conclusion was that, for the purpose of accurate beliefs, it seems both desirable and feasible for people to not knowingly disagree (on facts). That is, people should not be able to estimate the sign of how someone else’s future opinion will differ from their own current opinion.
I concluded that humans actually disagree more for signaling reasons. For example, being visibly persuaded by someone is widely seen as bowing to their higher status. This was one of the results that moved me to think more generally about hidden motives, such as we describe in our book The Elephant in the Brain: Hidden Motives in Everyday Life.
But I think about the subject often, and I have to admit now that lately a different explanatory factor has stood out: just how hard it is to get into someone else’s head. Let me explain.
Imagine that you face a big important decision that depends mainly on a single estimate, call it X. In this case, it makes a lot of sense to put substantial weight on estimates of X that you get from many different sources that you respect. You would of course discuss the topic as best you could before your decision deadline, but if X estimates continued to vary at deadline, in your decision you wouldn’t want to give too much extra weight to your own estimate, just because it was yours. You know that you can make mistakes as easily as others can.
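The weighted-averaging idea here can be made concrete with a minimal sketch. All the numbers and weights below are hypothetical illustrations, not from the text; the point is just that your own estimate gets no special extra weight for being yours.

```python
# Hedged sketch: combining several respected sources' estimates of X
# via a weighted average. Numbers and weights are made up for
# illustration only.

def combine_estimates(estimates, weights):
    """Weighted average of estimates; weights need not sum to 1."""
    total = sum(weights)
    return sum(e * w for e, w in zip(estimates, weights)) / total

# Treat your own estimate like any other source's.
my_estimate = 10.0
others = [12.0, 9.0, 14.0]
all_estimates = [my_estimate] + others
equal_weights = [1.0] * len(all_estimates)

print(combine_estimates(all_estimates, equal_weights))  # 11.25
```

If some sources had better track records, their weights could be raised accordingly; the structure of the calculation stays the same.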
However, most of the estimates X that we discuss in our conversations are far more removed from important concrete decisions. We care about them more because of the further conclusions that we might draw from them, and because of the checks and signals they offer to help us rate and improve our thinking systems. In which case, someone else’s opinion on X is mostly useful to your thoughts in giving you hints about what to consider, and in helping you to score and refine your thinking systems. The question is: how exactly can we make use of others’ opinions?
Notice: it is actually hard to ever make use of anyone else’s thinking on anything. As a professor I can tell you that the most painful part of my job is reading student essays. Even though students are trying hard to make themselves understood by a person they’ve been listening carefully to for an entire semester, and on a topic chosen to make this communication easier, even so it is usually quite a struggle to even dimly understand the median student essay.
Out on social media, such as on Twitter or in blog comment sections, a large fraction of posts are largely incomprehensible, so badly that there’s little point in trying to ask for clarification. This holds even for those written by college graduates. Academics form disciplines and schools of thought, with standard terms, methods, and training, primarily to make it much easier for them to understand each other.
Most of us have a decent chance of understanding our closest associates, but only because we’ve known them for a long time, have much shared background, and usually stick to pretty simple topics. The ability to have fluid, deep, widely-varying conversations with associates is a rare treasured ability.
The main criterion by which public intellectuals are selected, by far, is their ability to create an inviting mental space for readers. When writing works well, readers enter a mind that seems simple, inviting, and easy to relate to. Each sentence invites few possible interpretations, and the structure of arguments is made hard to miss. Achieving all this is hard work, and even the authors who can do so in their essays achieve far less in their informal conversations.
So if we turn our attention back now to situations where other random people hold an opinion differing from ours regarding some random estimate X, we can see how hard it can be to make practical use of that. Sure, if we are about to take an action that directly depends on X, we can include their estimate in our weighted average of known estimates. But if not, then we face the challenge of what exactly to do.
Our minds are complex systems that automatically give us output estimates X on a great many topics. They are all set up to automatically change all our estimates in response to a standard set of inputs, such as new sense perceptions and new abstract theories. And they can give us estimates on most any question we ask, and even give counterfactual answers about what we’d think if we accepted hypothesized perceptions or theories. Our minds do most of this quickly, smoothly, and out of our view.
But, alas, our minds don’t seem to be set up to easily take others’ estimates on various random X as standard input. When others can show us their reasoning in enough local detail, we can often assimilate that reasoning into our thoughts, and thus their conclusions as well. When it works, this is the magic of conversation. But when we just see estimates without supporting inputs, we struggle to guess what inputs might have led them to these conclusions. Sometimes we can make good guesses, but quite often we cannot.
So this is a plausible explanation for much human disagreement: while we can simply put weight on others’ opinions for decisions that depend closely enough on those opinions, we just don’t know how to update our mental systems to take their opaque opinions into account more generally. Our minds aren’t set up to take those as standard inputs. It is just too hard to search the space of all possible ways their minds could have come to such conclusions. While we can and do take their opinions as hints re what arguments and evidence to seek and consider, we find it hard to integrate their mere opinions deeply into our thinking.
We might want to agree, and can do so awkwardly in particular cases, but we can’t flexibly and fluidly integrate opinions with opaque sources into our thoughts.