Years ago, I spent a big chunk of my intellectual career studying the rationality of disagreement, mostly via math modeling, but also some lab experiments. My main conclusion was that, for the purpose of accurate beliefs, it seems both desirable and feasible for people to not knowingly disagree (on facts). That is, people should not be able to estimate the sign of how someone else’s future opinion will differ from their own current opinion.
I concluded that humans actually disagree more for signaling reasons. For example, being visibly persuaded by someone is widely seen as bowing to their higher status. This was one of the results that moved me to think more generally about hidden motives, such as we describe in our book The Elephant in the Brain: Hidden Motives in Everyday Life.
But I think about the subject often, and I have to admit now that lately a different explanatory factor has stood out: just how hard it is to get into someone else’s head. Let me explain.
Imagine that you face a big important decision that depends mainly on a single estimate, call it X. In this case, it makes a lot of sense to put substantial weight on estimates of X that you get from many different sources that you respect. You would of course discuss the topic as best you could before your decision deadline, but if X estimates continued to vary at that deadline, you wouldn't want your decision to give much extra weight to your own estimate just because it was yours. After all, you know that you can make mistakes as easily as others can.
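To make that averaging concrete, here is a minimal sketch of pooling several sources' estimates of X into one decision estimate, weighting each source by how much you trust it. This is just an illustration of the arithmetic; the weights and figures are made-up assumptions, not anything from my models.

```python
# Minimal sketch: pool several sources' estimates of X, weighting
# each source by a rough (hypothetical) level of trust in it.

def pooled_estimate(estimates, weights):
    """Weighted average of the estimates; weights need not sum to 1."""
    total = sum(weights)
    return sum(e * w for e, w in zip(estimates, weights)) / total

# Your own estimate enters as just one more input, with no
# privileged weight: you can err as easily as your advisors can.
estimates = [0.9, 1.4, 1.1, 1.2]   # X estimates: yours, then three others
weights   = [1.0, 1.0, 0.8, 1.2]   # made-up trust in each source

print(pooled_estimate(estimates, weights))  # ~1.155
```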
However, most of the estimates X that we discuss in our conversations are far more removed from important concrete decisions. We care about them more for the further conclusions that we might draw from them, and for the checks and signals they offer to help us rate and improve our thinking systems. In that case, someone else's opinion on X is mostly useful in giving you hints about what to consider, and in helping you to score and refine your thinking. The question is: how exactly can we make use of others' opinions?
Notice: it is actually hard to ever make use of anyone else's thinking on anything. As a professor I can tell you that the most painful part of my job is reading student essays. Even though students are trying hard to make themselves understood by a person they've been listening to carefully for an entire semester, and on a topic chosen to make this communication easier, it is usually quite a struggle to even dimly understand the median student essay.
Out on social media, such as on Twitter or in blog comment sections, a large fraction of posts are largely incomprehensible, and badly enough that there's little point in asking for clarification. This holds even for posts written by college graduates. Academics form disciplines and schools of thought, with standard terms, methods, and training, primarily to make it much easier for them to understand each other.
Most of us have a decent chance of understanding our closest associates, but that is because we've known them for a long time, share much background, and usually stick to pretty simple topics. The ability to have fluid, deep, widely-varying conversations with associates is rare and treasured.
The main criterion by which public intellectuals are selected, by far, is their ability to create an inviting mental space for readers. When writing works well, readers enter a mind that seems simple, inviting, and easy to relate to. Each sentence invites few possible interpretations, and the structure of arguments is made hard to miss. Achieving all this is hard work, and even authors who manage it in their essays achieve far less in their informal conversations.
So if we now turn our attention back to situations where other random people hold an opinion differing from ours regarding some random estimate X, we can see how hard it can be to make practical use of that. Sure, if we are about to take an action that directly depends on X, we can include their estimate in our weighted average of known estimates. But if not, then we face the challenge of what exactly to do.
Our minds are complex systems that automatically give us output estimates X on a great many topics. They are all set up to automatically change all our estimates in response to a standard set of inputs, such as new sense perceptions and new abstract theories. And they can give us estimates on most any question we ask, and even give counterfactual answers about what we'd think if we accepted hypothesized perceptions or theories. Our minds do most of this quickly, smoothly, and out of our view.
But, alas, our minds don't seem to be set up to easily take others' estimates on various random X as standard input. When others can show us their reasoning in enough local detail, we can often assimilate that reasoning into our thoughts, and thus their conclusions as well. When it works, this is the magic of conversation. But when we just see estimates without supporting inputs, we struggle to guess what inputs might have led them to those conclusions. Sometimes we can make good guesses, but quite often we cannot.
So this is a plausible explanation for much human disagreement: while we can simply put weight on others' opinions when making decisions that depend closely on those opinions, we just don't know how to update our mental systems to take their opaque opinions into account more generally. Our minds aren't set up to take those as standard inputs. It is just too hard to search the space of all possible ways their minds could have come to such conclusions. While we can and do take their opinions as hints regarding what arguments and evidence to seek and consider, we find it hard to integrate their mere opinions deeply into our thinking.
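For contrast, here is a sketch of what taking a bare estimate as a "standard input" might look like in the ideal case: a textbook normal-normal Bayesian update that treats the other person's stated estimate as a noisy signal of the truth. The variances standing in for trust are made-up assumptions, and the point is precisely that our minds don't run anything like this automatically on opaque opinions.

```python
# Sketch: ideal Bayesian update on someone's bare estimate of X,
# treated as a noisy signal of the truth. All numbers are made up.

def normal_update(prior_mean, prior_var, signal, signal_var):
    """Posterior mean and variance for a normal prior and normal signal."""
    k = prior_var / (prior_var + signal_var)   # how far to move toward the signal
    post_mean = prior_mean + k * (signal - prior_mean)
    post_var = (1 - k) * prior_var
    return post_mean, post_var

# My current view of X, then a colleague's opaque estimate of 1.6,
# trusted about as much as my own view (equal variances):
mean, var = normal_update(prior_mean=1.0, prior_var=0.25,
                          signal=1.6, signal_var=0.25)
print(mean, var)  # 1.3 0.125: equal trust means meeting halfway
```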
We might want to agree, and can do so awkwardly in particular cases, but we can’t flexibly and fluidly integrate opinions with opaque sources into our thoughts.
Yes, the key to useful disagreement and discussion is to minimize opacity. The more carefully and precisely terms are laid out, and the less room there is for interpretive disagreement, the better the chance of learning from each other and getting closer to the truth.
This was why I reacted so negatively to Girard and his approach. It's not that his ideas might not be good or correct -- maybe they're substantially above average -- but his approach to argument and presentation essentially maximizes opacity.
And I think that's what trips people up about the value of very vague and metaphorical (what some people call continental style) argumentation. The reason it's unhelpful isn't that the ideas are bad or weak. It's that no one else can really benefit when they are highly opaque, so he might as well have sketched a 3 paragraph blog post saying "hey, maybe this" as have written an extensive academic style book.
I think a major part comes down to what sources you trust. Say the argument is about whether gun control decreases or increases crime. I cite study X saying it decreases crime; the person I'm arguing with cites study Y. Neither of us has the experience with statistics or study methodology to actually determine which study was better done. What do we do? We turn to people or institutions we trust and respect, and see what they say on it. I personally usually turn to Scott Alexander and try to see if he has an opinion on the topic; another person might turn to Fox News, or Contrapoints, or their college professor.
Why do I trust Scott Alexander?
a) A lot of smart and respected people like economists and angel investors trust and respect him
b) His logical arguments on topics like morality, which don't require any statistics and are comprehensible in their entirety to me, make sense to me, so I extend some trust to him when he speaks on topics that aren't fully comprehensible to me
c) His evidence, to the degree that I am able to parse it, does make sense and looks credible
d) I've looked into people who try to debunk his arguments and they fail to convince me, because they're usually the reverse of a) and b). His detractors usually don't have as many respected and trusted people respecting and trusting them. And their logical arguments don't make as much sense to me (e.g. they might claim there is an intrinsic value in bodily integrity when arguing about kidney donation). Notably, their c) category arguments, which rely on studies and statistics and other forms of evidence like anecdotes, might be just as convincing as Scott's, and I cannot easily determine who's more credible for myself just by trying to look at who has the better interpretation of the data.