Paul, not all Bayesians think that the Modesty Argument follows from Aumann-like theorems. Remember, my post on this subject was an argument *against* Modesty, and I certainly count myself a Bayesian wannabe.

And yet... the vast, vast, overwhelming majority of the world's population believes in SOME form of deity. So this sort of decision process would seem to (unacceptably) require a pretty decisive rejection of atheism regardless of the superior arguments in its favor, non?

(This is another thing that worries me about normative Bayes -- not just the fact that it seems to require religion, which sounds like a reductio to me -- but the fact that it would seem to get different results depending on the level of generality of the belief in question.)

I wouldn't call it a "Bayesian" argument per se, but yes, the fact that a great many people believe in Christianity is, all else equal, a reason to think it true. Of course, even this large group is a minority of humans overall, so by itself this wouldn't get you over 50% confidence. And we know other things, such as that elite intellectuals tend to be less religious, and that people are much less able to articulate their reasons for religious beliefs compared to other sorts of beliefs. Folding all these factors into a total estimate is of course a very complex issue.

Aside from the above discussion of whether a Bayesian can be completely sure and wrong, I'm interested in whether Richard ought to update his beliefs to give weight to Jerry's LGE (even if Jerry doesn't do the same in return). If so, why shouldn't we update our beliefs to include strong possibilities that Christianity, Islam, Hinduism, etc. are true?

This question is particularly relevant to the discussion of Pascal's Wager (http://en.wikipedia.org/wik...). One of the standard objections to the Wager is that, just as there might be a Christian god who punishes nonbelievers, there might be an “anti-Christian” god who punishes believers, or a "professor god" who only rewards truth seekers (http://www.infidels.org/lib...). If the probability of these other possibilities is the same as that of a regular Christian god, so the argument goes, then one’s prospects for entering heaven and avoiding hell are no better either way.

However, if we take what appears to be the Bayesian approach, we find that the probability of a regular Christian god is indeed far higher. (I use Christianity, rather than Islam or Hinduism, because it’s the world’s largest religion: http://en.wikipedia.org/wik...) There are 2.1 billion Christians in the world, and probably only a handful of anti-Christians or believers in a professor god.
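For concreteness, here is a toy sketch of that head-count weighting. Only the 2.1 billion figure comes from above; the other counts are invented stand-ins for "a handful", so treat this as illustration rather than a real estimate:

```python
# Toy head-count weighting for the rival gods discussed above. The 2.1
# billion figure is from the comment; the other counts are invented
# stand-ins for "a handful" of believers.
counts = {
    "christian_god": 2_100_000_000,
    "anti_christian_god": 10,
    "professor_god": 10,
}
total = sum(counts.values())
probs = {god: n / total for god, n in counts.items()}
print(probs)  # the Christian god dominates; the symmetry objection fails
```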

Of course, this analysis is slightly naive. For one thing, it’s not clear that the size of a religion is directly proportional to the weight that we should give it. We may know, for instance, that a particular religion encourages its adherents to have lots of children, so that we would expect it to have larger numbers for a reason that doesn’t seem to make it more plausible.

We may also decide to discount our credence in Christianity on the basis that some interpretations of it contradict science (evolution, geology, heliocentrism). However, there are still forms of Christianity that are perfectly consistent with science, so it’s not clear why we shouldn’t at least give them a high subjective probability.

I wasn't really pursuing a critique, just trying to push the boundaries, but fair enough.

Paul, if you subject a Bayesian to enough distortion, they stop being a Bayesian. An extreme case: if you run a Bayesian through a food processor, they end up as hamburger. Hamburgers don't execute Bayesian updates in response to incoming evidence. I'll call this possibility (0); more to follow.

Let a "Bayesian" be defined as one who updates probability distributions, in accordance with probability theory, in response to incoming evidence. There are two ways a pure Bayesian can end up completely sure and completely wrong. (Defined as 0.9999 sure and wrong, rather than 1.0 sure and wrong - a Bayesian should never say literally "1.0".) These are the two ways:

(1) Seeing wildly misleading evidence by pure bad luck - a fair coin that comes up twenty heads in a row, so you think it's fixed. The evidential relations are what you think they are, but you hit on an anti-winning lottery ticket by sheer accident.

(2) Starting out with a prior that is broken and non-self-repairing - for example, Jaynes's "binomial monkey prior", on which each coin independently has a 0.9999 probability of coming up tails, and you are incapable of considering any other hypothesis. Then executing a Bayesian update will lead you to believe that ten thousand heads should be followed by a tail, and after seeing yet another head, you will still believe the next coin is almost certain to be a tail. Moreover, reflecting on your own mental mechanisms, your prior is such that, even conditioning on the fact that your mental mechanisms have done much worse than maximum entropy on the last ten thousand predictions, you nonetheless expect them to be perfectly calibrated on the next round. (Both failure modes are sketched in code below.)
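A minimal sketch of both failure modes, with illustrative numbers of my own choosing: a 1-in-1000 prior on a fixed coin for (1), and twenty flips standing in for ten thousand in (2) so the floats don't underflow. The logic is the same either way:

```python
# Failure mode (1): wildly misleading evidence under a sane prior.
# A fair coin happens to come up twenty heads in a row; the update is
# correct, but it lands on near-certainty in a falsehood. The 1/1000
# prior on "fixed" is an illustrative assumption.
prior_fixed = 1 / 1000
lik_fixed = 1.0          # a two-headed coin always shows heads
lik_fair = 0.5 ** 20     # fair coin: about 9.5e-7

posterior_fixed = prior_fixed * lik_fixed / (
    prior_fixed * lik_fixed + (1 - prior_fixed) * lik_fair
)
print(posterior_fixed)   # ~0.999: almost completely sure, wrong by bad luck


# Failure mode (2): a broken, non-self-repairing prior. Hypotheses are
# values of p = P(heads); the binomial monkey prior puts all its mass
# on p = 0.0001, so no run of heads can ever move its prediction.
def predict_heads(hypotheses, prior, heads_seen):
    """P(next flip is heads) after observing heads_seen consecutive heads."""
    weights = [pr * p ** heads_seen for p, pr in zip(hypotheses, prior)]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, hypotheses)) / total

# Monkey prior: a single hypothesis, so the prediction never budges.
print(predict_heads([0.0001], [1.0], 20))                    # still 0.0001

# A prior that spreads mass over several biases repairs itself quickly.
print(predict_heads([0.0001, 0.5, 0.9999],
                    [0.98, 0.01, 0.01], 20))                 # ~0.999
```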

Religion is somewhere between (0) and (2), mostly (0). That is, it is not broken Bayesianism, but simply not-Bayesian-at-all. Anyone who is not asleep and dreaming knows this to be so...

Paul, it is possible for a Bayesian to have very unusual evidence, but not to be completely sure and wrong. I don't think you understand Bayesian decision theory well enough yet to pursue a critique of Bayesian theory of disagreement. (A test of your relevant Bayesian analysis ability would be to construct a Bayesian model describing the situation you have in mind.)

Robin: Isn't it possible for a Bayesian to be completely sure and completely wrong, because that Bayesian is subject to a distortion that he can't in principle know about? (Descartes's demon, schizophrenia, etc.) It seems to me that Richard can rationally consider this possibility with respect to Jerry's gnosis without Jerry being rationally obligated to consider that possibility with respect to his own experience. The reason for the difference might be just the qualitative nature of the experience: there's something about the experience that (allegedly) communicates its utter reliability to Jerry, but that Jerry can't possibly communicate to Richard. So Richard discounts it because the type of experiences he is able to comprehend doesn't include flashes of enlightenment.

Rob, yes of course direct experiences are different from talking - who ever thought otherwise?

It seems to me that the point of the story is that directly experienced evidence is not the same as communicated evidence - our channels of communication are not broad enough to give people every detail of our experiences. It is like trying to describe a classic painting to someone who hasn't seen it.

Paul, the analysis that says Bayesians with a common prior will agree does not depend at all on their experiencing the same evidence. If in fact Jerry had evidence that he was very sure of, and if Richard believed Jerry just wouldn't make a mistake about such a thing, then all it would take would be for Jerry to tell Richard of his experience. Then Richard would just believe Jerry and they would agree.

Humans are not Bayesians, so if human Jerry says he feels very sure, then human Richard must consider the fact that humans are often very sure yet very wrong.
