The topic of Pascal’s wager has come up several times before on Overcoming Bias, most notably in Eliezer’s post on Pascal’s mugging. One common objection is the many-gods argument: while it’s true that you might be punished eternally (or, if you like, for 3^^^^3 years) if and only if you don’t follow, say, Christianity, it’s possible to imagine other scenarios in which you would be punished if and only if you do follow Christianity; thus, it’s claimed, the different possibilities cancel each other out. In responding to the Pascal’s-mugging post, Michael Vassar suggested that we should have “equal priors due to complexity, equal posteriors due to lack of entanglement between claims and facts.”

But are the priors really equal? Intuitively, the anti-Christian God should take more bits to describe, since that hypothesis requires stating the entire concept of Christianity and then a little extra. I don’t know whether that’s actually the case, but my point is that it’s not obvious the two hypotheses are, bit for bit, identical in Kolmogorov complexity. Furthermore, the set of relevant hypotheses is bigger than these two: there are tons of hypotheses according to which whether you follow Christianity makes a difference to whether you suffer for 3^^^^3 years, and I’m not convinced that their prior probabilities all exactly cancel out.
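To see why a few extra bits of description matter, consider the standard complexity-based prior, under which a hypothesis with a shortest description of K bits gets prior weight proportional to 2^(-K). The bit counts below are purely hypothetical placeholders, not estimates of any real hypothesis:

```python
# Sketch of how description length translates into prior probability,
# using the Solomonoff-style assignment prior(h) proportional to 2^(-K(h)).
# The bit counts here are made-up illustrations.

def complexity_prior(bits: int) -> float:
    """Unnormalized prior for a hypothesis whose shortest description is `bits` long."""
    return 2.0 ** -bits

k_christian = 1000       # hypothetical description length of the Christian-God hypothesis
k_anti_christian = 1010  # hypothetically "all of Christianity, plus a little extra"

ratio = complexity_prior(k_christian) / complexity_prior(k_anti_christian)
print(ratio)  # 2**10 = 1024.0: ten extra bits cost a factor of about a thousand in prior
```

The point is only that the cancellation argument needs the description lengths to match exactly; a handful of extra bits already breaks the symmetry by a large factor.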
Moreover, is there really no entanglement? Is the probability of observing the world we do exactly the same given Christianity as given anti-Christianity? Is the probability, given Christianity, that billions of people would be persuaded of the truth of the Christian God’s message exactly the same as the probability, given anti-Christianity, that billions of people would be fooled into believing the Christian God’s message? Or for that matter, are the probabilities that billions of people will follow non-Christian religions equal under the two scenarios? And so on. There seems to be just too much data in the world for our probabilities to remain symmetric.
This relates to a second complaint about the wager: with vast amounts of data to process and an enormous space of possible religious hypotheses to search, Pascal’s wager (which is just an optimization problem) is computationally infeasible, especially for human minds. This is true, but even if we can’t find the global optimum (if one even exists), I don’t see why we shouldn’t make what local improvements we can, given our limited knowledge, processing ability, and creativity in specifying hypotheses. Just by considering a few basic factual predictions that various religions make, for example, it ought to be possible to separate hypotheses of similar prior probability by many orders of magnitude in their posteriors. We could make some progress on these back-of-the-envelope calculations even without a full Solomonoff-inducting AI (though the latter would indeed be extraordinarily helpful).
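As an illustration of the kind of back-of-the-envelope calculation meant here, a single Bayesian update on one factual prediction can already drive hypotheses with equal priors far apart. The likelihood numbers below are purely hypothetical, chosen only to show the mechanics:

```python
# Two hypotheses start with equal priors; one observation with very different
# likelihoods under each pushes their posteriors orders of magnitude apart.
# All numbers are hypothetical illustrations, not estimates about real religions.

prior_h1 = prior_h2 = 0.5

# P(observed data | hypothesis): suppose some datum is 10,000 times likelier under h1.
lik_h1 = 1e-3
lik_h2 = 1e-7

evidence = prior_h1 * lik_h1 + prior_h2 * lik_h2  # total probability of the data
post_h1 = prior_h1 * lik_h1 / evidence
post_h2 = prior_h2 * lik_h2 / evidence

print(post_h1 / post_h2)  # about 10,000: equal priors, posteriors four orders of magnitude apart
```

With equal priors, the posterior ratio just equals the likelihood ratio, so even a single sharply discriminating prediction does most of the work.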
In view of the high uncertainty surrounding the question of which religion (possibly including atheism) to choose, maybe it would be best to avoid making a commitment now, since you might learn more as time goes on that would affect your choice. Moreover, there’s a small chance that in trying to adhere to the commands of a particular religion and in surrounding yourself with fellow believers, you might blunt your ability to think rationally. This argument is fine as far as it goes (though you should also consider your probability of dying before you finally do make up your mind), but then why not spend considerable effort doing further research on the question of which religion to follow? The expected value of additional information would seem to be extraordinarily high.
You might reply that the problem of which religion to follow is overly narrow: there are lots of other projects to work on, perhaps involving more probable scenarios than Pascal’s wager does. For instance, maybe you’re aiming for physical immortality via ordinary materialist means and intend to spend all your time researching how best to stay alive until significant anti-ageing technologies kick in. Fair enough, but what if, as is true in my case, you’re more concerned about avoiding eternal suffering than about achieving eternal blissful life? Are there secular scenarios in which, in order to prevent yourself from experiencing massive amounts of suffering, you would have to set aside consideration of Pascal’s wager in the religious case?
Finally, some might object to using an unbounded utility function because it leads to mathematical difficulties. I admit that I don’t like the idea of bounding utility functions, but even if we do bound them, can we not take the bounds large enough that speculative Pascalian scenarios still dominate over more minor, mundane considerations?