24 Comments

Yes, the key to useful disagreement and discussion is to minimize opacity. The more carefully and precisely terms are laid out, and the less room there is for interpretive disagreement, the better the chance of learning from each other and getting closer to the truth.

This was why I reacted so negatively to Girard and his approach. It's not that his ideas might not be good or correct -- maybe they're substantially above average -- but his approach to argument and presentation essentially maximizes opacity.

And I think that's what trips people up about the value of vague and metaphorical (what some people call continental style) argumentation. The reason it's unhelpful isn't that the ideas are bad or weak. It's that no one else can really benefit when they are highly opaque, so the author might as well have sketched a three-paragraph blog post saying "hey, maybe this" as have written an extensive academic-style book.


I think there's a lot of potential to clarify arguments by using nodes and arcs to diagram which propositions are justified in terms of which other propositions.


That's a step beyond this issue. Analytic philosophy and mathematics have norms about breaking arguments into manageable inferences, etc., that work decently well. But if you reject the idea that you should carefully precisify each claim and define terms, then all showing the logical structure does is give a false sense of rigor.

I mean it doesn't help to know that this is supposed to support that if you can't tell if it actually does support it.

Dec 5, 2023·edited Dec 5, 2023

If you don't carefully precisify each claim and define terms, then showing the logical structure as a diagram reveals how rubbish your argument is. We should be able to look at any node in the diagram, look at parent nodes with arcs to that node, and say, "yes, I understand what these claims mean, and the parent nodes do reasonably support the child node." If you can't do that - throw the argument in the trash. That's part of the value of the diagram.
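The node-and-arc audit described above can be sketched in a few lines. This is a minimal illustration with invented propositions (all names and claims here are hypothetical, not from the post): each proposition is a node, each "supports" relation is an arc, and a walk over the graph lets a reader check every claim against exactly the parent claims that are supposed to justify it.

```python
# Sketch of an argument diagram: propositions as nodes, support relations
# as arcs from parent (supporting) claims to child (supported) claims.
from collections import defaultdict

claims = {
    "P1": "Precise terms reduce interpretive disagreement.",
    "P2": "Less interpretive disagreement makes errors easier to spot.",
    "C":  "Precise terms help discussions converge on the truth.",
}

supports = [("P1", "C"), ("P2", "C")]  # arcs: parent supports child

# Index each node's parents so every claim can be audited in isolation.
parents = defaultdict(list)
for src, dst in supports:
    parents[dst].append(src)

# Audit pass: for each node, list the parent claims a reader must accept.
for node, text in claims.items():
    ps = parents.get(node, [])
    if ps:
        print(f"{node} is supported by: {', '.join(ps)}")
    else:
        print(f"{node} is a premise; nothing to check upstream")
```

The audit the comment proposes is then exactly: stand at any node, read its parents, and ask whether they genuinely support it; if not, the diagram has localized the failure for you.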


Yes, I agree. I'm just saying in the cases I was concerned about we aren't even there yet. I don't mean to say that isn't a good idea, only that we haven't even gotten agreement on much more basic aspects necessary to make this work.


I think a major part comes down to what sources you trust. Say the argument is about whether gun control decreases or increases crime. I cite study X saying it decreases crime; the person I'm arguing with cites study Y. Neither of us has the experience with statistics or study methodology to actually determine which study was better done. What do we do? We turn to people or institutions we trust and respect, and see what they say on it. Usually I personally turn to Scott Alexander and try to see if he has an opinion on the topic; another person might turn to Fox News, or Contrapoints, or their college professor.

Why do I trust Scott Alexander?

a) A lot of smart and respected people like economists and angel investors trust and respect him

b) His logical arguments on topics like morality, which don't require any statistics and are comprehensible in their entirety to me, make sense to me, so I extend some trust to him when he speaks on topics that aren't fully comprehensible to me

c) His evidence, to the degree that I am able to parse it, does make sense and looks credible

d) I've looked into people who try to debunk his arguments, and they fail to convince me, because they're usually the reverse of a) and b). His detractors usually don't have as many respected and trusted people respecting and trusting them. And their logical arguments don't make as much sense to me (e.g. they might claim there is an intrinsic value in bodily integrity when arguing about kidney donation). Notably, their c)-category arguments, which rely on studies, statistics, and other forms of evidence like anecdotes, might be just as convincing as Scott's, and I cannot easily determine who's more credible for myself just by trying to look at who has the better interpretation of the data.


I would add epistemological humility to your list of Scott’s virtues. He is acutely aware that he could be wrong both on base facts and remote conclusions for many reasons, and that he might be ignorant of some of those reasons. That manifests from time to time when he notes publicly how he was wrong about something. It’s sometimes a post all of its own. Contrast that with the approach of anyone in the media, who buries admissions of error in small print on page 2 or just changes electronic stories to be right after the impact of the false story has already played out.


"We might want to agree, and can do so awkwardly in particular cases, but we can’t flexibly and fluidly integrate opinions with opaque sources into our thoughts."

Makes sense. Imagine if you could flexibly and fluidly integrate other opaque beliefs and value them as highly as your own. You would be strongly disincentivized from forming your own beliefs, since that takes much longer and you could just do a "dirty import" of someone else's belief. Your head would quickly fill up with opaque beliefs that you do not really understand. And since people hold differing, wrong, or only-contextually-correct beliefs, you would end up with an incoherent belief network. Whenever you tried to access it to make a decision, the various beliefs would interfere with one another wildly and leave you entirely confused and uncertain. So strongly discounting a priori other people's merely stated opinions when forming decision-relevant beliefs is necessary to make sense of the world.


All the evidence shows that it is exceedingly easy to get into someone's head, provided that one does not take the rational route.


Indeed. Emotions get a free pass.


Disappointingly much easier than getting into their pants.


Where precision multiplies options (or reflects market-as-real-world diversity), we also get opacity.

https://whyweshould.substack.com/p/communications-opacity-and-the-blur

I can be very precise in my own idiolect and get nowhere.


This seems to be an argument against prediction markets, because they provide just the probability estimate for a complex question. And if it is difficult for people to update on that, the more likely outcome seems to be that they take it as an argument only if it suits them. It seems a better instrument would be something that lays bare all the reasoning behind the estimate.


I think the opposite is true. I find it rather easy to understand people. Why people disagree is your underlying question, though. I think at least 20%, and at most 80%, of it is just fighting about the correct words and definitions.


People get mad at me for writing simply, but it does seem to cut down on arguments


I expect you may consider this obvious Robin, but I agree with doopydoo that one of the primary ways we overcome the opacity is through trust.

We might attempt to define this a little more by saying that even if someone's path to an opinion on subject Z is opaque to us, if we both understand their path to prior opinions X and Y (those paths are less opaque to us), and we agree with those paths, then we give them some credit for their path to Z.

In other words, even though their path to Z may be opaque, our past experience with their less opaque opinions gives us trust that the methods or process they used to get to Z is valid. So, we'll give their opinion more weight than a truly opaque opinion.

Of course, just because we trust that someone got to Z through a valid path doesn't mean we actually know what that path is - it's still actually opaque to us. So, trust allows us to pretend that some other people's opinions are less opaque to us than they really are. But though I used the word "pretend" intentionally, I don't think this approach to opaqueness is flawed - I think it's a valid approach to filtering opaque opinions without investing tons of effort into piercing the opaqueness.
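The trust mechanism sketched in this comment can be made concrete with a toy calculation. This is a hypothetical sketch, not anyone's actual model: trust is estimated as the fraction of a person's past, less-opaque paths (to X, Y, and a third topic W, all invented here) that we agreed with, and that trust decides how far their opaque opinion on Z moves our own credence.

```python
# Hypothetical trust-weighting sketch: extend trust earned on transparent
# topics to an opaque opinion on Z. All topics and numbers are invented.
track_record = {"X": True, "Y": True, "W": False}  # did we agree with their path?

# Trust = fraction of their past reasoning paths we endorsed.
trust = sum(track_record.values()) / len(track_record)

prior_z = 0.5           # our own uninformed credence in claim Z
their_credence_z = 0.9  # their stated (opaque) credence in Z

# Weighted blend: the more trust, the more their opinion moves ours.
updated_z = (1 - trust) * prior_z + trust * their_credence_z
print(round(updated_z, 3))
```

A linear blend is only one crude way to "pretend" the opinion is less opaque than it is, but it captures the comment's point: past transparent agreement buys an opaque opinion nonzero, but bounded, weight.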


I'm not sure just how subconscious this opacity is. I've certainly encountered people who will refuse to answer questions that would help you decompose the structure of what got them there, and it's hard to believe that's not by design - especially when their instrumental rationality seems to work fine.


Maybe this points to the fact that discourse requires people to build a shared language to understand each other's POV? It is like two graphs that may have overlaps, but we need to make those connections to be able to reach an understanding.


One way to understand someone else's view is to try to make their argument for them. That is, you pretend you're a lawyer assigned to argue the other side, and you try to come up with the most convincing arguments you can. It sounds silly but this type of playacting engages a less defensive and more creative mode of thought.

Consider however how impressive it is that we can communicate at all: That with just a few hundred bytes of information transfer I can cause 10^10 neurons in your brain to rapidly coordinate into the same thought I had at the time of writing. This extreme level of data compression is of course made possible by the large amount of prior knowledge we share. To say that another way, when the decompression doesn't work on the receiving end, sometimes it's because your (unstated) prior knowledge differs from the sender's in some crucial way. As you point out in this essay, it's difficult to debug because that prior knowledge is unstated.


1. Your post's title, by sheer coincidence, also has a technical linguistic reading, because "opacity", "block", and, obviously, "agreement" are all terms of art there.

2. The thrust of the post is correct. However, we can also infer general tendencies of others' thinking that we consider wrong. Like "he always overestimates", "she always neglects social factors", "humans routinely fail to multiply". These can produce predictable directions of disagreement regardless of whether we know the cause for the general tendency.

Dec 5, 2023·edited Dec 5, 2023

Opacity should block agreement more than it already does! People constantly say things for dishonest and irrational reasons, chiefly because they think saying those things will gain them status or increase their unity with their peers and superiors, or because they stand to make a profit by having other people believe their deception. If you can't follow the reasoning behind another person's claim yourself, then there's a very good chance that their reason for making that claim was dishonest or irrational.

It's a social *problem* when people just accept what other people say without checking up on the exact reasoning and evidence for themselves. That's how misinformation spreads, and how religions work, and how authoritarian states operate. That's not a method that tends towards greater rationality on a societal level!


I suppose you mean incomprehensible (impossible to understand), not incompressible (random). Though the latter, taken literally, seems an interesting idea.

author

Yes; fixed; thanks.


Just to mention again that people also appear to disagree due to value differences. For example, the tobacco exec stating that their products do not cause cancer. For maximum impact this should look like a genuine factual disagreement.
