How do and should we form and change opinions? Logic tells us to avoid inconsistencies and incoherences. Language tells us to attend to how meaning is inferred from ambiguous language. Decision theory says to distinguish values from fact opinion, and says exactly how decisions should respond to these. Regarding fact opinion, Bayesian theory says to distinguish priors from likelihoods, and says exactly how fact opinion should respond to evidence.
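To make that last recipe concrete, here is a minimal sketch in Python of a single Bayesian update, with made-up numbers chosen only for illustration:

```python
# A single Bayesian update: prior times likelihood, renormalized.
# All numbers here are hypothetical, chosen only for illustration.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a claim after seeing one piece of evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# A claim we gave 30% credence, with evidence 4x more likely if the claim is true:
posterior = bayes_update(prior=0.30, p_evidence_if_true=0.8, p_evidence_if_false=0.2)
print(round(posterior, 3))  # ~0.632
```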
Simple realism tells us to expect errors in actual opinions, relative to all of these standards. Computing theory says to expect larger errors on more complex topics, and opinions closer to easily computed heuristics. And many kinds of human and social sciences suggest that we see human beliefs as often like clothes, which in mild weather we use more to show our features to associates than to protect ourselves from the elements. Beliefs are especially useful for showing loyalty and morality.
There’s another powerful way to think about opinions that I’ve only recently appreciated: opinions get entrenched. In biology, natural selection picks genes that are adaptive, but adds error. These gene choices change as environments change, except that genes which are entangled with large complex and valued systems of genes change much less; they get entrenched.
We see entrenchment also all over our human systems. For example, at my university the faculty is divided into disciplines, the curricula into classes, and classes into assignments in ways that once made sense, but now mostly reflect inertia. Due to many interdependencies, it would be slow and expensive to change such choices, so they remain. Our legal system accumulates details that become precedents that many rely on, and which become hard to change. As our software systems accrue features, they get fragile and harder to change. And so on.
Beliefs also get entrenched. That is, we are often in the habit of building many analyses from the same standard sets of assumptions. And the more analyses that we have done using some set of assumptions, the more reluctant we are to give up that set. This attitude toward the set is not very sensitive to the evidential or logical support we see for each of its assumptions. In fact, we are often pretty certain that individual assumptions are wrong, but because they greatly simplify our analysis, we hope that the set still enables a decent approximation.
When we use such standard assumption sets, we usually haven’t thought much about the consequences of individually changing each assumption in the set. As long as we can see some plausible ways in which each assumption might change conclusions, we accept it as part of the set, and hold roughly the same reluctance to give it up as for all the other members.
For example, people often say “I just can’t believe Fred’s dead”, meaning not that the evidence of Fred’s death isn’t sufficient, but that it will take a lot of work to think through all the implications of this new fact. The existence of Fred had been a standard assumption in their analysis. A person tempted to have an affair is somewhat deterred from this because of their standard assumption that they were not the sort of person who has affairs; it would take a lot of work to think through their world under this new assumption. This similarly discourages people from considering that their spouses might be having affairs.
In academic theoretical analysis, each area tends to have standard assumptions, many of which are known to be wrong. But even so, there are strong pressures to continue using prior standard assumptions, to make one’s work comparable to that of others. The more different things that are seen to be explained or understood via an assumption set, the more credibility is assigned to each assumption in that set. Evidence directly undermining any one such assumption does little by itself to reduce use of the set.
In probability theory, the more different claims one adds to a bundle, the less likely is the conjunction of that bundle. However, the more analyses that one makes with an assumption set, the more entrenched it becomes. So by combining different assumption sets so that they all get credit for all of their analyses, one makes those sets more, not less, entrenched. Larger bundles get less probability but more entrenchment.
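To illustrate that contrast with a toy calculation (assuming, just for illustration, independent claims and made-up numbers):

```python
# Toy contrast: as claims are added to a bundle, the probability that all of
# them hold shrinks (here assuming independence), even while the number of
# analyses built on the bundle -- a crude proxy for entrenchment -- grows.
# All numbers are hypothetical.

claim_probs = [0.9, 0.8, 0.8, 0.7, 0.7]   # made-up credences for individual claims
analyses_per_claim_added = 10             # made-up rate of new analyses using the set

p_all = 1.0
analyses = 0
for n, p in enumerate(claim_probs, start=1):
    p_all *= p
    analyses += analyses_per_claim_added
    print(f"claims: {n}  P(all true): {p_all:.2f}  analyses using the set: {analyses}")
```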
Note that fictional worlds that specify maximal detail are maximally large assumption sets, which thus maximally entrench.
Most people feel it is quite reasonable to disagree, and that claim is a standard assumption in most reasoning about reasoning. But a philosophy literature did arise wherein some questioned that assumption, in the context of a certain standard disagreement scenario. I was able to derive some strong results, but in a different and, to my mind, more relevant scenario. The fact that I used a different scenario, and came from a different discipline, meant my results got ignored.
Our book Elephant in the Brain says that social scientists have tended to assume the wrong motives re many common behaviors. While our alternate motives are about as plausible and easy to work with as the usual motives, the huge prior investment in analysis based on the usual motives means that few are interested in exploring our alternate motives. The investment is not just in theoretical analysis, but also in feeling that we are good people, a claim which our alternate assumptions undermine.
Even though most automation today has little to do with AI, and has long followed steady trends, with almost no effect on overall employment, the assumption set recently favored among talking elites remains this: new AI techniques are causing a huge trend-deviating revolution in job automation, soon to push a big fraction of workers out of jobs, and within a few decades may totally surpass humans at most all jobs. Once many elites are talking in terms of this assumption set, others also want to join the same conversation, and so adopt the same set. And once each person has done a lot of analysis using that assumption set, they are reluctant to consider alternative sets. Challenging any particular item in the assumption set does little to discourage use of the set.
The key assumption of my book Age of Em, that human level robots will be first achieved via brain emulations, not AI, has a similar plausibility to AI being first. But this assumption gets far less attention. Within my book, I picked a set of standard assumptions to support my analysis, and for an assumption that has an X% chance of being wrong, my book gave far less than X% coverage to that possibility. That is, I entrenched my standard assumptions within my book.
Physicists have long taken one of their standard assumptions to be denial of all “paranormal” claims, taken together as a set. That is, they see physics as denying the reality of telepathy, ghosts, UFOs, etc., and see the great success (and status) of physics overall as clearly disproving such claims. Yes, they once mistakenly included meteorites in that paranormal set, but they’ve fixed that. Yet physicists don’t notice that even though many describe UFOs as “physics-defying”, they aren’t that at all; they only plausibly defy current human tech abilities. Yet the habit of treating all paranormal stuff as the same denied set leads physicists to continue to staunchly ridicule UFOs.
I can clearly feel my own reluctance to consider theories wherein the world is not as it appears, because we are being fooled by gods, simulation sysops, aliens, or a vast world elite conspiracy. Sometimes this is because those assumptions seem quite unlikely, but in other cases it is because I can see how much I’d have to rethink given such assumptions. I don’t want to be bothered; haven’t I already considered enough weird stuff for one person?
Life on Mars is treated as an “extraordinary” claim, even though the high rate of rock transfer between early Earth and early Mars makes it nearly as likely that life came from Mars to Earth as vice versa. This is plausibly because life starting only on Earth is the standard assumption used in many analyses, while life starting on Mars seems like a different, conflicting assumption.
Across a wide range of contexts, our reluctance to consider contrarian claims is often less due to their lacking logical or empirical support, and more because accepting them would require reanalyzing a great many things that one had previously analyzed using non-contrarian alternatives.
In worlds of belief with strong central authorities, those authorities will tend to entrench a single standard set of assumptions, thus neglecting alternative assumptions via the processes outlined above. But in worlds of belief with many “schools of thought”, alternative assumptions will get more attention. It is a trope that “sophomores” tend to presume that most fields are split among different schools of thought, and are surprised to find that this is usually not true.
This entrenchment analysis makes me more sympathetic toward allowing and perhaps even encouraging different schools of thought in many fields. And as central funding sources are at risk of being taken over by a particular school, multiple independent sources of funding seem more likely to promote differing schools of thought.
The obvious big question here is: how can we best change our styles of thought, talk, and interaction to correct for the biases that entrenchment induces?
Well, besides idea futures and related instruments, or whatever current financial practices might have similar effects...
"Spreadsheets" for idea sets and conflicting sets of idea sets. Sharable, with versioning and visualization. Discussion systems so one is putting commentary on proposed modifications, or pinning-downs, or ramifications, of assumptions or changes to them, and having conversation threads about individual change/exploration attempts.
In math you start with arithmetic, which works with constants, and then go to algebra, with variables, and calculus, with processes of change, and processes determined by constraints on changes.
Spreadsheets oddly straddle arithmetic and algebra: what you see on the surface is constants, but you can change the input constants and see concrete ramifications even if just hypothetical. Playing like that is something that seems to be helpful to humans anyway.
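As a minimal sketch of what such an idea-set "spreadsheet" might look like (cell names here are hypothetical, borrowed from examples in the post):

```python
# A tiny model of an assumption "spreadsheet": assumptions are named input
# cells, conclusions are derived cells, and swapping one assumption shows which
# downstream conclusions change. Names and structure are hypothetical.

baseline = {
    "human_level_first": "brain emulation",
    "automation_trend": "steady",
}

def derive(assumptions):
    """Derived cells: conclusions recomputed from the current assumption set."""
    return {
        "mass_job_loss_soon": assumptions["automation_trend"] != "steady",
        "em_economy_scenario": assumptions["human_level_first"] == "brain emulation",
    }

# Swap a single input assumption and see which conclusions flip.
variant = dict(baseline, automation_trend="trend-deviating revolution")
before, after = derive(baseline), derive(variant)
changed = {k for k in before if before[k] != after[k]}
print("conclusions that flip under the variant assumption:", changed)
```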
Logic and probability sort of start on the arithmetic-algebra-calculus route for ideas. What would help is more systems that interface multiple humans for that, in the sense that, e.g., Facebook supports a whole culture and set of human practices, not just a shared database of posts.
If you look at politics, journalism, and ideology-sports in general, we already have entrenched systems that may have (semi-false) entrenched meta-rationales (govt by the people, marketplace of ideas, balance of powers, etc.), but are absolutely known to be incoherent and contradictory, formed by wars and compromises on the base policy-creation/modification level. In these systems we develop lore about all the failure modes and all the patching and kludging and reform, refactoring and reconciliation attempt styles. E.g. in the newspaper my roommate reads, "pork" is a common word in headlines.
Having and using meta-expertise about the messed-upness of systems may serve to allow problems to survive longer, but it also has something to do with treating idea-sets as semi-fluid and managing changes to them.
"I can clearly feel my own reluctance to consider theories wherein the world is not as it appears, because we are being fooled by gods, simulation sysops, aliens, or a vast world elite conspiracy. Sometimes this is because those assumptions seem quite unlikely, but in other cases it is because I can see how much I’d have to rethink given such assumptions."
I think another reason to mark these 'conscious-agents-did-it' explanations down a priori is that they tend to be tweakable to adapt to a wide range of possible observations, so the theory schemas they represent are, almost by construction, very hard to falsify.
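One rough way to cash that out in the Bayesian terms used earlier in the post (numbers hypothetical):

```python
# A theory that can be tweaked to fit many possible observations spreads its
# predictions thin, so it assigns low probability to any one observation and
# gains little credit when that observation occurs. All numbers are hypothetical.

n_possible_observations = 100

p_obs_given_narrow_theory = 0.5                                # concentrates its predictions
p_obs_given_tweakable_theory = 1.0 / n_possible_observations   # spreads them over everything

likelihood_ratio = p_obs_given_narrow_theory / p_obs_given_tweakable_theory
print(f"evidence favors the narrow theory by a factor of {likelihood_ratio:.0f}")  # 50
```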