Gnosis

In honor of Christmas, a religious question.

Richard and Jerry are Bayesians with common priors. Richard is an atheist. Jerry was an atheist, but then he had an experience which he believes gives him certain knowledge of the following proposition (LGE): “There is a God, and he loves me.” Jerry experiences his knowledge as gnosis: a direct experience of divine grace that bestowed certain knowledge, period, not conditioned on anything else at all. (Some flavors of Christianity and many other religions claim experiences like this, prominently including at least some forms of Buddhism.) Besides conferring what he takes to be certain knowledge of LGE, Jerry’s gnosis greatly modifies his probability estimates for almost every proposition in his life. For example, before the gnosis, the Christian Bible didn’t significantly affect his subjective probabilities for the propositions it is concerned with. Now it counts very heavily.

Richard and Jerry are aware of a disagreement as to the probability of LGE, and also the truth of the various things in the Bible. They sit down to work it out.

It seems like the first step for Richard and Jerry is to merge their data. Otherwise, Jerry has to violate one rule of rationality or another: since his gnosis is consistent only with the certainty of LGE, he can either discard plainly relevant data (irrational) or fail to reach agreement (irrational). Richard does his best to replicate the actions that got the gnosis into Jerry’s head: he fasts, he meditates on the koans, he gives money to the televangelist. But no matter what he does, Richard cannot get the experience that Jerry had. He can get Jerry’s description of the experience, but Jerry insists that the description falls woefully short of the reality — it misses a qualitative aspect, the feeling of being “touched,” the bestowal of certain knowledge of the existence of a loving God.

Is it in principle possible for Richard and Jerry to reach agreement on their disputed probabilities given a non-transmissible experience suggesting to Jerry that P(LGE)=1?

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Paul, the analysis that says Bayesians with a common prior will agree does not depend at all on their experiencing the same evidence. If in fact Jerry had evidence that he was very sure of, and if Richard believed Jerry just wouldn’t make a mistake about such a thing, then all it would take would be for Jerry to tell Richard of his experience. Then Richard would just believe Jerry and they would agree.

    Humans are not Bayesians, so if human Jerry says he feels very sure, then human Richard must consider the fact that humans are often very sure yet very wrong.
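Robin's point can be sketched numerically: Richard does not need Jerry's experience, only Jerry's report of it, and he updates on the probability of such a report given each hypothesis. This is a minimal sketch with made-up numbers (the prior of 0.01 and the report likelihoods are illustrative assumptions, not anything from the discussion); the second likelihood encodes the "humans are often very sure yet very wrong" factor.

```python
def posterior_lge(prior, p_report_given_lge, p_report_given_not):
    """Richard's posterior for LGE after hearing Jerry's report.

    He conditions on the fact *that Jerry reports gnosis*, not on the
    non-transmissible experience itself. Plain Bayes' rule:
    P(LGE | report) = P(report | LGE) P(LGE) / P(report).
    """
    numerator = prior * p_report_given_lge
    evidence = numerator + (1 - prior) * p_report_given_not
    return numerator / evidence

# Hypothetical numbers: a 0.01 common prior on LGE, a 0.9 chance Jerry
# reports gnosis if LGE is true, and a 0.01 chance a human reports such
# an experience even when LGE is false (sincere but mistaken certainty).
richard_posterior = posterior_lge(0.01, 0.9, 0.01)
```

With these numbers Richard's posterior lands near 0.48: if Richard believed Jerry "just wouldn't make a mistake about such a thing" (the false-report likelihood near zero), the posterior would approach 1 and they would agree; the nonzero chance of sincere error is what keeps him far below certainty.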

  • Rob Spear

    It seems to me that the point of the story is that directly experienced evidence is not the same as communicated evidence – our channels of communication are not broad enough to give people every detail of our experiences. It is like trying to describe a classic painting to someone who hasn’t seen it.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Rob, yes of course direct experiences are different from talking – who ever thought otherwise?

  • Paul Gowder

Robin: Isn’t it possible for a Bayesian to be completely sure and completely wrong, because that Bayesian is subject to a distortion that he can’t in principle know about? (Descartes’s demon, schizophrenia, etc.) It seems to me that Richard can rationally consider this possibility with respect to Jerry’s gnosis without Jerry being rationally obligated to consider that possibility with respect to his own experience. The reason for the difference might be just the qualitative nature of the experience: there’s something about the experience that (allegedly) communicates its utter reliability to Jerry, but that Jerry can’t possibly communicate to Richard. So Richard discounts it because the type of experiences he is able to comprehend doesn’t include flashes of enlightenment.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Paul, it is possible for a Bayesian to have very unusual evidence, but not to be completely sure and wrong. I don’t think you understand Bayesian decision theory well enough yet to pursue a critique of Bayesian theory of disagreement. (A test of your relevant Bayesian analysis ability would be to construct a Bayesian model describing the situation you have in mind.)

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Paul, if you subject a Bayesian to enough distortion they stop being Bayesians. An extreme case: if you run a Bayesian through a food processor, they end up as hamburger. Hamburgers don’t execute Bayesian updates in response to incoming evidence. I’ll call this possibility (0), more to follow.

    Let a “Bayesian” be defined as one who updates probability distributions, in accordance with probability theory, in response to incoming evidence. There are two ways a pure Bayesian can end up completely sure and completely wrong. (Defined as 0.9999 sure and wrong, rather than 1.0 sure and wrong – a Bayesian should never say literally “1.0”.) These are the two ways:

    (1) Seeing wildly misleading evidence by pure bad luck – a fair coin that comes up twenty heads in a row, so you think it’s fixed. The evidential relations are what you think they are, but you hit on an anti-winning lottery ticket by sheer accident.

    (2) Starting out with a prior that is broken and non-self-repairing – for example, the prior is Jaynes’s “binomial monkey prior”, that each coin independently has a 0.9999 probability of coming up tails, and you are incapable of considering any other hypothesis. Then executing a Bayesian update will lead you to believe that ten thousand heads should be followed by a tail, and after seeing another head, you will still believe the next toss is almost certain to come up tails. Moreover, when you reflect on your own mental mechanisms, this prior leads you to expect them to be perfectly calibrated on the next round, even conditioning on the fact that they have done much worse than maximum entropy on the last ten thousand predictions.
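    The non-self-repairing prior in (2) can be made concrete with a small sketch. The 0.9999 figure is from Jaynes’s example; the rival “almost always heads” hypothesis and the tiny mixture weight are illustrative assumptions added for contrast. A prior containing only the one hypothesis cannot move no matter what it sees, while a prior that assigns even a sliver of probability to a rival hypothesis repairs itself after a handful of heads.

    ```python
    def monkey_predict_tail(n_heads_seen):
        # The broken prior: every toss is independently tails with p = 0.9999,
        # and no alternative hypothesis exists. Evidence cannot move the
        # prediction, so ten thousand heads change nothing.
        return 0.9999

    def mixture_predict_tail(n_heads_seen, p_rival_prior=0.001):
        # A self-repairing prior: mix the monkey hypothesis with a rival
        # hypothesis under which tails has probability 0.0001 instead.
        like_monkey = 0.0001 ** n_heads_seen   # P(n heads | monkey)
        like_rival = 0.9999 ** n_heads_seen    # P(n heads | rival)
        post_monkey = (1 - p_rival_prior) * like_monkey
        post_rival = p_rival_prior * like_rival
        post_monkey /= (post_monkey + post_rival)  # normalize
        # Predictive probability of tails on the next toss:
        return post_monkey * 0.9999 + (1 - post_monkey) * 0.0001
    ```

    After twenty observed heads, the mixture's prediction for tails has already collapsed to roughly 0.0001, while the pure monkey prior is still insisting on 0.9999.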

    Religion is somewhere between (0) and (2), mostly (0). That is, it is not broken Bayesianism, but simply not-Bayesian-at-all. Anyone who is not asleep and dreaming, knows this to be so…

  • Paul Gowder

    I wasn’t really pursuing a critique, just trying to push the boundaries, but fair enough.

  • http://utilitarian-essays.com/ Utilitarian

    Aside from the above discussion of whether a Bayesian can be completely sure and wrong, I’m interested in whether Richard ought to update his beliefs to give weight to Jerry’s LGE (even if Jerry doesn’t do the same in return). If so, why shouldn’t we update our beliefs to include strong possibilities that Christianity, Islam, Hinduism, etc. are true?

    This question is particularly relevant to the discussion of Pascal’s Wager (http://en.wikipedia.org/wiki/Pascal’s_Wager). One of the standard objections to the Wager is that, just as there might be a Christian god who punishes nonbelievers, there might be an “anti-Christian” god who punishes believers, or a “professor god” who only rewards truth seekers (http://www.infidels.org/library/modern/richard_carrier/heaven.html). If the probability of these other possibilities is the same as that of a regular Christian god, so the argument goes, then one’s prospects for entering heaven and avoiding hell are no better either way.

    However, if we take what appears to be the Bayesian approach, we find that the probability of a regular Christian god is indeed far higher. (I use Christianity, rather than Islam or Hinduism, because it’s the world’s largest religion: http://en.wikipedia.org/wiki/Major_religious_groups.) There are 2.1 billion Christians in the world, and probably only a handful of anti-Christians or believers in a professor god.

    Of course, this analysis is slightly naive. For one thing, it’s not clear that the size of a religion is directly proportional to the weight that we should give it. We may know, for instance, that a particular religion encourages its adherents to have lots of children, so that we would expect it to have larger numbers for a reason that doesn’t seem to make it more plausible.

    We may also decide to discount our credence in Christianity on the basis that some interpretations of it contradict science (evolution, geology, heliocentrism). However, there are still forms of Christianity that are perfectly consistent with science, so it’s not clear why we shouldn’t at least give them a high subjective probability.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    I wouldn’t call it a “Bayesian” argument per se, but yes the fact that a great many people believe in Christianity is, all else equal, a reason to think it true. Of course even this large group is a minority of humans overall, so by itself this wouldn’t get you over 50% confidence. And we know other things, such as that the very elite intellectuals tend to be less religious, and that people are much less able to articulate their reasons for religious beliefs, compared to other sorts of beliefs. Folding all these factors into a total estimate is of course a very complex issue.

  • Paul Gowder

    And yet… the vast, vast, overwhelming majority of the world’s population believes in SOME form of deity. So this sort of decision process would seem to (unacceptably) require a pretty decisive rejection of atheism regardless of the superior arguments in its favor, non?

    (This is another thing that worries me about normative Bayes — not just the fact that it seems to require religion, which sounds like a reductio to me — but the fact that it would seem to get different results depending on the level of generality of the belief in question.)

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Paul, not all Bayesians think that the Modesty Argument follows from Aumannlike theorems. Remember, my post on this subject was an argument *against* Modesty, and I certainly count myself a Bayesian wannabe.