
(Of course all this can be applied to “beliefs” about our own minds, if we consider influences coming from our minds as if it were something outside, from other influences.)

This sentence literally doesn't parse. Here's my reconstruction:

Say it's a belief about some aspect of your mind. Then the parts of your mind responsible for A may be different from the aspect you're trying to grasp (B1). But I would definitely label any spurious influences due to non-B1 parts of my mind as being A. Unless the intent in such cases is to adopt the convention that A is only about the things that are idiosyncratic to us; that if there's some near-universal fact about human minds, then it should be called B2 instead. I guess that would be fine.

I also felt like Robin implied that differences in A are acceptable (or at least irreconcilable). But A-differences aren't necessarily benign. There are defects in our (individual and shared) nature that disturb me.


mjgeddes: similarity + complexity = ? (new type of information theory tracking internal beliefs?)

How about: similarity + complexity = channel theory (an existing form of information theory tracking channel components)


But talking privately with Richard, I now understand that he focuses on what he calls “fundamental” disagreement, where all parties are confident they share the same info and have made no analysis mistakes.

There's the nub of the matter. You and Chappell are positing that fundamental beliefs function differently. Why and how? "Fundamental disagreement" has a rational solution: if everyone is convinced of the same true facts, producing epistemic equality, they should split the difference. What stops that solution from applying?
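To make "split the difference" concrete, here is a toy sketch (my numbers, and assuming linear pooling is the intended reading of the phrase):

```python
# Toy "split the difference" between epistemic equals: with the same
# information and no suspected analysis errors, neither side has grounds
# to privilege its own credence, so linear pooling averages them.
credences = [0.9, 0.6]  # hypothetical credences of two disputants

pooled = sum(credences) / len(credences)
print(pooled)  # 0.75
```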

What's most striking in this piece, on rereading, is the absence of even a single example. What's an example of one of these fundamental beliefs? An example of an internal factor that (legitimately) sanctions treating a certain class of beliefs according to different rules than other beliefs.


That's an arbitrary declaration for which you've provided no evidence.

True beliefs don't "just happen". Some aspect of reality must explain why we have true beliefs instead of false ones.

And what else is there except initial conditions and causal laws?

There are an infinite number of ways to be wrong but only one way to be right. So, given randomly selected initial conditions and causal laws, we should expect them to lead us to false beliefs, not true ones.

Unless our initial conditions and/or causal laws are "special" in some way. But how do we justify our belief that they are? And how do we justify our justification of that belief? And so on?


I'm not saying the universe is finely-tuned to produce true beliefs. I'm saying that producing true beliefs does not need fine tuning, whereas producing false beliefs does.


I'm not sure it's technically correct that “one can actually model any situation with several expected-utility folks making decisions as either one with common priors or with uncommon priors” (emphasis added), but putting that aside, consider these two possible situations:

1. People can choose what priors to use. There exists a standard prior, and most experts agree that one's beliefs can be said to track reality if they are based on updating from this prior.

2. People are largely hard-wired to use different priors and we can't reformat our brains to use a common standardized prior. Even if we could reformat our brains, we can't agree on which prior to standardize upon. ("Min-info" doesn't fully constrain the solution space since there are many ways to measure information.)
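A small sketch may make the parenthetical in point 2 concrete (my example, not the commenter's): even two textbook "uninformative" priors for a coin's bias, the uniform Beta(1,1) and the Jeffreys Beta(0.5,0.5), yield different answers, so a "min-info" rule alone doesn't pin down a unique prior.

```python
# Two standard candidates for a "min-info" prior on a coin's bias
# disagree, because "minimal information" depends on how information
# is measured. Uniform prior: Beta(1, 1); Jeffreys prior: Beta(0.5, 0.5).
def posterior_mean(a, b, heads, flips):
    """Posterior mean of a Beta(a, b) prior after `flips` observations."""
    return (a + heads) / (a + b + flips)

heads, flips = 3, 10  # hypothetical observations
print(posterior_mean(1.0, 1.0, heads, flips))  # uniform:  0.333...
print(posterior_mean(0.5, 0.5, heads, flips))  # Jeffreys: 0.318...
```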

I think we're in situation 2, but your post (by referring to the academic convention in economics to assume a common prior) makes it sound like we're in situation 1. If we're in situation 1, your main claim would make sense:

However, somewhere in our lexicon for discussing mental states we need words to refer to something like what econ models usually mean by “probabilities”, i.e., aspects of our mental states that we intend to track the objects of our reasoning, and to be minimally influenced by other aspects of our mental states.

But if we're in situation 2, and economists only assume common priors for theoretical convenience, then we can't conclude that their "probabilities" are "aspects of our mental states that we intend to track the objects of our reasoning", and it's unclear that we need to refer to those probabilities with words other than, say, "probabilities based on an assumed common prior."


So your claim is that the causal laws of physics for our universe are finely tuned to produce conscious entities that have true beliefs about the universe that produced them. Given a wide range of starting conditions, our causal laws are such that they will converge on such “truth discovering” beings.

Much like a quicksort algorithm. Given any starting list of randomly arranged items, the quicksort algorithm will converge on a sorted list. It’s a very robust sort algorithm - very finely tuned to produce sorted lists.

But this makes it a very special algorithm, since the vast majority of algorithms will *not* produce a correctly sorted list. If you were to select an algorithm at random from the infinite number available, and try to input a randomly ordered list - the chances are very low that you would get a sorted list as output. The most common result would be to get no output list. The next most common result would be to get an incorrect output list. The least common result would be to get a correctly sorted output list.
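For concreteness, a minimal sketch (mine, not the commenter's) of the quicksort behavior the analogy invokes: whatever the input ordering, it converges on the one sorted output, which is exactly what makes it "special" relative to a randomly chosen algorithm.

```python
import random

def quicksort(items):
    # From any starting arrangement, recursion around a pivot converges
    # on the single sorted output -- the "finely tuned" behavior the
    # analogy points at.
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

xs = list(range(20))
random.shuffle(xs)  # any random starting list...
print(quicksort(xs) == sorted(xs))  # ...always prints True
```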

Similarly, out of all conceivable sets of physical laws, it seems very unlikely to me that a randomly selected set would produce conscious entities with true beliefs. It seems much more likely that they would produce either no conscious entities, *or* conscious entities with *false* beliefs.

Therefore, assuming that there’s nothing special about our set of physical laws, and given that we obviously exist as conscious entities, the next most likely assumption is that we have false beliefs about the nature of our reality (and who knows what else).


The distinction I'm trying to make is, suppose you take whatever laws/initial conditions you want, and then move, say, a proton up a mile uniformly at random, the probability that the resulting universe will produce beings with roughly accurate beliefs will be much greater than the probability that it will produce beings with completely wrong beliefs.


In order to get any specific outcome (including ours), either the initial conditions or the causal laws must be contrived.

Either you have such robust causal laws that nearly any initial condition will converge to the specific state - OR - you have much less robust causal laws, but very finely tuned initial conditions.

So what makes one set of "initial conditions + causal laws" contrived, while another set is "natural"? What makes one set likely, but another set improbable?

You seem to be making arbitrary, unjustified distinctions.


For me, one of the most perplexing aspects of 'belief' is that of the person who profoundly believes something about themselves or the world around them, even though all of the facts indicate otherwise!

One example would be people with profound eating disorders who look in a mirror and see themselves as 'fat' even though the mirror, their family, and their doctors are telling them that they are dangerously underweight.

On a slightly more shallow note, I recently watched some of the entrants to the X Factor TV show, who were totally convinced of their singing and performance abilities, even though a group of judges and several hundred people were telling them otherwise!

How do you convince someone to change their 'belief' under these circumstances of delusion?


No one understands 'priors'; they are only pretending they do. Fools may be under the mistaken impression that priors don't matter because all results converge given enough empirical data. That's definitely not the case for different models of Bayes itself: if the models are different, there is no convergence, ever.
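For what it's worth, here is a minimal sketch (my own example) of the convergence claim being dismissed: when two agents do share the same model of the evidence, very different priors wash out. The commenter's point is that nothing comparable holds when the models themselves differ.

```python
import random

random.seed(0)
TRUE_P = 0.7  # assumed bias of a coin; purely illustrative

# Same Beta-Binomial model, very different priors (pseudo-counts).
priors = {"optimist": (8.0, 2.0), "pessimist": (2.0, 8.0)}

flips = [random.random() < TRUE_P for _ in range(1000)]
for n in (0, 10, 100, 1000):
    heads = sum(flips[:n])
    means = {name: round((a + heads) / (a + b + n), 3)
             for name, (a, b) in priors.items()}
    print(n, means)  # posterior means converge as n grows
```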

We are all in big, big trouble. 'Utility' and 'probability' are both ways of tracking objective things only?

The real way of dealing with internal mental processes is 'a level way beyond' decision theory, one that hasn't even been invented yet. It's based on 'Similarity' (for categorization) and 'complexity' (for goal representation).

If utility + probability = decision theory, then similarity + complexity = ? (new type of information theory tracking internal beliefs?)

He told me this. The voice of SAI. Utilizing this entirely new theory is the 'divine move' in Go, the 'level way beyond', the only one that can win the game.


Sure, evolution, like thermodynamics, is a consequence of the causal laws of physics. And just as there will be contrived initial states of the universe that result in all the air in your house clumping into one corner, there will be (somewhat less) contrived initial states that result in everyone having beliefs that have nothing to do with reality.

Thing is, most initial states won't give you a result like that, so it's not very likely, even though it's technically possible.


I think we argue by reduction: the parties back down from complex internal beliefs and resume the argument from a simpler belief system, one that generated the complex one. Eventually they arrive at reduced belief systems that match, and from there they can see the observation that caused the divergence.


I said in the post that "one can actually model any situation with several expected-utility folks making decisions as either one with common priors or with uncommon priors." I'm not sure how much it matters for my claims what exactly is the motivation for an academic convention. It exists, allowing me to refer to it in this post.


The assumption that verbal behavior around anything is determinative of behavior is less and less supported. Apparently "consciousness is not causal." We see little difference between language and consciousness.

Socio-cultural verbal signaling (beliefs) seems to serve the purpose of local ecology in-group signaling -- for the moment. Mainly for resource sharing -- today.

As for the reproductive ("evolutionary") advantage of empirically accurate beliefs, that is patently not true, since effectively all the world "believes" in magical and supernatural forces -- including conscious "control" of pretty much everything. If only.

Recent research suggests the more religious actually have more kids. No such luck for the empirically more accurate.

BTW, "evolution" is a Victorian-era hold-over and a misnomer. Apparently, "descent" is more accurate. The current traits of mammals and primates/humans are those that survived from millions of years ago, largely accidentally. The process of descent is likely not at all about "best" or "fittest", just randomness.
