13 Comments

Well besides idea futures and instruments, or whatever current financial practices might have similar effects...

"Spreadsheets" for idea sets and conflicting sets of idea sets. Shareable, with versioning and visualization. Discussion systems where one can attach commentary to proposed modifications, pinnings-down, or ramifications of assumptions or changes to them, with conversation threads about individual change/exploration attempts.

In math you start with arithmetic, which works with constants, and then go to algebra, with variables, and calculus, with processes of change, and processes determined by constraints on changes.

Spreadsheets oddly straddle arithmetic and algebra: what you see on the surface is constants, but you can change the input constants and see concrete ramifications even if just hypothetical. Playing like that is something that seems to be helpful to humans anyway.

Logic and probability sort of start on the arithmetic-algebra-calculus route for ideas. What would help is more multiple-human-interfaced systems for that. In the sense that, e.g., Facebook supports a whole culture and set of human practices, not just a shared database of posts.

If you look at politics, journalism, and ideology-sports in general, we already have entrenched systems that may have (semi-false) entrenched meta-rationales (govt by the people, marketplace of ideas, balance of powers, etc.), but are absolutely known to be incoherent and contradictory, formed by wars and compromises on the base policy-creation/modification level. In these systems we develop lore about all the failure modes and all the patching and kludging and reform, refactoring and reconciliation attempt styles. E.g. in the newspaper my roommate reads, "pork" is a common word in headlines.

Having and using meta-expertise about the messed-upness of systems may serve to allow problems to survive longer, but it also has something to do with treating idea-sets as semi-fluid and managing changes to them.

"I can clearly feel my own reluctance to consider theories wherein the world is not as it appears, because we are being fooled by gods, simulation sysops, aliens, or a vast world elite conspiracy. Sometimes this is because those assumptions seem quite unlikely, but in other cases it is because I can see how much I’d have to rethink given such assumptions."

I think another reason to a priori mark these 'conscious-agents-did-it' explanations downwards is because they tend to be tweakable to adapt to a wide range of possible observations - so the theory schemas they represent are, almost by construction, very hard to falsify.

To the extent to which the various causes of opinion entrenchment have tended to estimably influence securities pricing, I guess that they have mostly been arbitraged out.

The analogous thing in the domain of academic theories seems to be most directly done via idea futures. I suppose a “poor person’s” version of this, one which doesn’t require the input of crowds, would be to model scientific idea entrenchment retrospectively, then apply it prospectively as one input to research topic selection. (This would, I think, be more relevant to the academic analogue of long-term traders than to short-term ones.)

It is annoying to publish something that is clearly right, and have it ignored by people who persist in accepting the wrong alternative that you have refuted. But if you can understand this phenomenon—if you have a theoretical explanation (here: irrational investment in theory analysis and in feeling oneself to be a good person, plus a network effect—wanting to join the conversation of elites)—the situation is a bit more tolerable.

But there is practically no hope “to correct for the biases that entrenchment induces”: human nature is incorrigible.

Once more into the breach dear friends!

Living organisms seem to be intrinsically constituted to deceive. Folly of Fools by Robert Trivers offers an excellent explanation. That explanation addresses the twisting of reason by individuals; why such twisting gains group acceptance still needs an explanation that has not yet been plausibly offered (fear?). Examples where accepted "truth" is actually consensus are not hard to find (read history). An antidote may be to only consider (analyze) situations where the outcome is practical and the feedback is almost immediate. Such thinking has traditionally been criticized by intellectuals as folk reasoning, and a corresponding view is held by the "folks". Maybe entropy applies to intelligent reasoning as well, as time goes on and opportunities to profit dwindle.

The obvious big question here is: how can we best change our styles of thought, talk, and interaction to correct for the biases that entrenchment induces?

Isn't that the answer?

I definitely worry about entrenchment, even outside of academic investigation. At my best I hope the entrenchment of a certain opinion is because I've gotten years of evidence that support it.

Small thing, do you mean accrue instead of accurate in " As our software system accurate features?"

Fixed.

I assume you meant "the high rate of material transfer between early Earth and early Mars" instead of "life transfer".

For example, people often say “I just can’t believe Fred’s dead”, meaning not that the evidence of Fred’s death isn’t sufficient, but that it will take a lot of work to think through all the implications of this new fact. The existence of Fred had been a standard assumption in their analysis.

This is legitimately one of the funniest things I've ever read.

For a few years I've harbored a theory that one of the key ways our beliefs become entrenched is simply by stating them. Situations arise where we have cause to explain some belief we have, and every time we articulate it it gets reinforced in our own mind.

Is simply thinking about how you would articulate a belief sufficient to cause reinforcement? I don't know. I suspect there are disconnected processes in the brain that may actually require it to be heard (or seen!) to have that effect. It's pretty clear that hearing others make some claim repeatedly has an effect on our brains; could it be even more so when we ourselves repeat claims?

Is it a physical process or a subconscious cognitive process? Dunno. If it's a cognitive process, it may be that some part of our mind doesn't want to create dissonance by making future claims that disagree with our past claims, and the dissonance is resolved by adjusting our priors in favor of the stated belief.

More interestingly: What happens if, whether due to time, technical, or social constraints, we find ourselves in a position where we're not able to fully and clearly articulate a belief? That is, what happens if we're frequently only able to communicate the belief in a low-resolution form (e.g. a 60-second sound bite or 280 characters)? Does the low-resolution version of the belief get reinforced?

Re: "...haven’t I already considered enough weird stuff for one person?" Yes! Yes, you have, and I doubt any reader here will disagree. But that doesn't mean you shouldn't do even more. More is expected of you. Onward, then.
