Overcoming Bias Commenter:

Yvain has come closest to the truth here. As he states, the objections people make to the wager are ad hoc; they would reject it even if all the objections were known to be false.

Why is this? As I've stated before, human beings naturally have a bounded utility function, and anyone who decides to act as though he had an unbounded utility function is deciding to act like a fanatic. With a bounded utility function, there is little expected value in anything with a sufficiently low probability. But if someone were really willing to act as though his utility function were unbounded, he would become a fanatic... and he would accept the wager.
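
To illustrate with a minimal sketch (the probability, payoff, and saturation cap below are arbitrary numbers chosen only to make the contrast visible):

```python
import math

# Arbitrary illustrative numbers: a wager with a minuscule probability
# of paying off, but an astronomically large raw payoff if it does.
p = 1e-12
raw_payoff = 1e30

def bounded_utility(x, cap=1e6):
    """A simple bounded utility function: grows with x but saturates at `cap`."""
    return cap * (1 - math.exp(-x / cap))

# Unbounded (linear) utility: the enormous payoff swamps the tiny probability.
ev_unbounded = p * raw_payoff                  # 1e18 -- take the wager
# Bounded utility: the payoff saturates near the cap, so the tiny probability
# leaves a negligible expected value.
ev_bounded = p * bounded_utility(raw_payoff)   # ~1e-6 -- ignore the wager

print(ev_unbounded, ev_bounded)
```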

Overcoming Bias Commenter:

I find all of the standard tricks used against Pascal's Wager intellectually unsatisfying because none of them are at the root of my failure to accept it. Yes, it might be a good point that there could be an "atheist God" who punishes anyone who accepts Pascal's Wager. But even if a super-intelligent source whom I trusted absolutely informed me that there was definitely either the Catholic God or no god at all, I suspect I would still feel that Pascal's Wager was a bad deal. So it would be dishonest of me to say that the possibility of an atheist god "solves" Pascal's Wager.

The same thing is true for a lot of the other solutions proposed. Even if this super-intelligent source assured me that yes, if there is a God He will let people into Heaven even if their faith is only based on Pascal's Wager, that if there is a God He will not punish me for my cynical attraction to incentives, and so on, and re-emphasized that it was DEFINITELY either the Catholic God or nothing, I still wouldn't happily become a Catholic.

Whatever the solution, I think it's probably the same for Pascal's Wager, Pascal's Mugging, and the Egyptian mummy problem I mentioned last month. Right now, my best guess for that solution is that there are two different answers to two different questions:

Why do we believe Pascal's Wager is wrong? Scope insensitivity. Eternity in Hell doesn't sound that much worse, to our brains, than a hundred years in Hell, and we quite rightly wouldn't accept Pascal's Wager to avoid a hundred years in Hell. Pascal's Mugger killing 3^^^3 people doesn't sound too much worse than him killing 3,333 people, and we quite rightly wouldn't give him a dollar to get that low a probability of killing 3,333 people.

Why is Pascal's Wager wrong? From an expected utility point of view, it's not. In any particular world, not accepting Pascal's Wager has a 99.999...% chance of leading to a higher payoff. But averaged over very large numbers of possible worlds, accepting Pascal's Wager or Pascal's Mugging will have a higher payoff, because of that infinity going into the averages. It's too bad that doing the rational thing leads to a lower payoff in most cases, but as everyone who's bought fire insurance and not had their house catch on fire knows, sometimes that happens.
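
As a minimal sketch of that averaging, with made-up probabilities and a finite stand-in for the infinite payoff:

```python
import random

random.seed(0)

P_GOD = 1e-5       # made-up probability that the wager-relevant world is actual
HELL = -1e12       # finite stand-in for an infinitely bad payoff
WAGER_COST = -1    # small cost of accepting the wager

def payoff(accept, god_exists):
    if god_exists:
        return WAGER_COST if accept else HELL
    return WAGER_COST if accept else 0

# Sample a large number of possible worlds.
worlds = [random.random() < P_GOD for _ in range(1_000_000)]

# In almost every individual world, refusing the wager pays at least as well...
refuse_wins = sum(payoff(False, g) >= payoff(True, g) for g in worlds) / len(worlds)

# ...but averaged over all the worlds, accepting comes out ahead, because the
# rare hell-worlds dominate the mean.
avg_accept = sum(payoff(True, g) for g in worlds) / len(worlds)
avg_refuse = sum(payoff(False, g) for g in worlds) / len(worlds)

print(f"refusing does at least as well in {refuse_wins:.4%} of worlds")
print(f"mean payoff: accept {avg_accept:.1f}, refuse {avg_refuse:.1f}")
```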

I realize that this position commits me, so far as I am rational, to becoming a theist. But my position that other people are exactly equal in moral value to myself commits me, so far as I am rational, to giving almost all my salary to starving Africans who would get a higher marginal value from it than I do, and I don't do that either.

Overcoming Bias Commenter:

The idea is that you do not know the truth, so you only think in possibilities. The possibility you described should be filed under the list of possibilities that do not require belief, just like the possibility that there is no higher power or the possibility that there is a higher power but that it does not differentiate between "good" and "bad" persons. It doesn't add anything revolutionary to the idea, just makes it a little more detailed.

Overcoming Bias Commenter:

Pascal's Wager will fail to work if God is real, the afterlife is real, all religions are fake, and God will judge our afterlife by what we do in the world.

joe ho:

Pascal's Wager will fail to work if God is real, the afterlife is real, and all religions are fake, and God will judge and reward us by what we do in the world.

Say one person works for money and another works for charity; you will appreciate the one who works for charity more. Similarly, between someone who gives to charity for the sake of religion and someone who gives to charity without religion, which one will God appreciate more? Probably the person without religion.

Overcoming Bias Commenter:

The Christian Bible does not explicitly state the infinite nature of hell, so this must be introduced; but if hell can be introduced to that tradition, why can't it be introduced to support any notion, Marxism for instance, even if that notion didn't itself originally include any reference to infinite suffering?

It appears impossible to meaningfully decrease our probability of infinite suffering, if a god who would create hell exists: we cannot know the criteria for avoiding hell even if we know the correct God, we have no guarantee that heaven is an infinite respite from hell, and we have an infinite number of religions to choose between. No tradition appears consistent on the correct means of attaining salvation. A being capable of creating hell appears to my mind likely to continue damning people after they enter heaven. We have literally an infinite number of religions to choose between, since religions unknown to man, such as a non-evidentialist damning God or the god who really, really hates cows mentioned above, have at least as much likelihood of existing as religions known to man (probably more, since such gods would be simpler, not having to have visited Earth), and the chance of choosing the correct religion would be rendered infinitesimal enough to counter even infinite reward or punishment.

Overcoming Bias Commenter:

burger flipper, you make a very good point. I've been talking about Pascal's wager mainly from an egoist perspective, but utilitarians certainly should be very concerned about their fellow man. While there are a number of missionaries out there, I think the reason we don't see more is similar to the reason more people don't give away most of their income to charity. Also, most Christians are not utilitarians.

I think the utilitarian case for proselytism is somewhat weaker than the egoist case for personal conversion, because while I can imagine only a few scenarios other than hell according to which actions I take now will determine whether I suffer eternally, I can think of lots of scenarios in which actions I take now will prevent infinite suffering on the part of others. This is partly because, when I adopt a utilitarian concern for all sentient organisms, I can prevent infinite suffering by preventing finite amounts of suffering on the part of infinitely many organisms, not just suffering of infinite duration on the part of one particular organism.

Overcoming Bias Commenter:

Why don't believers follow through on the implications of Pascal's wager? If they truly love their fellow man and believe that their fellow man is in danger of eternal torment, how can they ever spend one spare moment not proselytizing?

I'd be much more likely to be swayed by an argument if I saw it applied consistently: to those among the saved as well as those still in danger.

Overcoming Bias Commenter:

As the expected computational capacity of our world, conditional on its 'basement' status, goes down, the probability that it is a simulation in a world with more computation-friendly laws goes up, and at some point appealing to the Dark Lords of the Matrix has a better expected value than the alternative. The laws of physics in our world (laws of thermodynamics, relativity, etc) do not seem conducive to absurd (10^^^^^^^^^^^^^^^^^^^^^10) numbers of computations, and it seems plausible that a relevant fraction of worlds in Tegmark's ensemble are much more computation-friendly.

Overcoming Bias Commenter:

Here's a final note on Carl's hell-escape scenario. Whether the idea works seems to depend on whether one endorses causal decision theory, evidential decision theory, or something in between. My intuition lies strongly with causal decision theory (since probabilities are in the mind and changing your beliefs about who you are doesn't actually change who you are), but there appears to be a large literature on this debate, and I don't doubt that the evidential decision theorists have some good arguments.

steven, interesting point. However, it's not clear to me why the egoist case should parallel the utilitarian one. I would think an egoist would care only about his own particular instantiation, not all of the instantiations of himself that might be run. I guess this gets back to the Unification vs. Duplication discussion above.

Overcoming Bias Commenter:

I put an argument on my blog against simulations being very practically relevant: Strike the Root

Overcoming Bias Commenter:

I didn't assert that infinite utility is impossible. My point was that because human brains are finite, they naturally calculate according to a bounded utility function.

This doesn't mean that I'm saying the wager is wrong, but that normal humans cannot accept it, because their brains do not work that way. If someone, based on some theory, believes that we should act as though we had unbounded utility functions, he is trying to get around his own brain, and he may well consequently accept the wager (or some variant, as Robin suggested).

Overcoming Bias Commenter:

Carl: U1 and U2 are psychologically identical, so their decisions will be correlated.

Where does this correlation come from? If you mean that U1 and U2 have psychologically indistinguishable histories up to the present, that implies nothing about their correlation in the future, does it? U1 and U2 were picked from all the mind-histories in the universe as two that happened to share the same history up to this moment. But unless there's some causal mechanism correlating them, why does that tell us anything about future moments? Or we could say that we chose U1 and U2 to be two mind-histories that are identical in both the past and the future, but there are lots of other mind-histories that are identical in the past only, and U1, U2, ... would have no way to know that they aren't one of those. You can't get a relevant correlation by "data mining" of random noise -- somewhere causation has to be involved.

One causal mechanism is that U1 is programmed to copy whatever U2 does. Yes, there's correlation, but it's in the wrong direction to help U1.

What am I missing here?

Nick: It seems to me that you should consider yourself as the set {U1, U2, ...} = {all systems having this experience}, not as one member.

Does that commit you to the position that Nick Bostrom calls "Unification" on p. 186 of this piece? If so, what do you think of his arguments against Unification in the subsequent three pages? If not, perhaps you could elaborate your position further, or point me to a reference?

Robin: The Christian God scenario doesn't weigh in my beliefs much larger than many other possible gods, but I suspect I may well in principle succumb to some other wagers-to-placate-gods.

Can you think of other wagers that you find compelling? I'd be quite interested to hear them.

Overcoming Bias Commenter:

U1 and U2 are psychologically identical, so their decisions will be correlated.
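
A minimal sketch of that claim, with an arbitrary deterministic stand-in for U's psychology:

```python
def decision_procedure(history):
    """Arbitrary stand-in for U's psychology: a deterministic function
    from everything U has experienced so far to what U does next."""
    return "commit" if "read the hell-escape argument" in history else "refuse"

shared_history = ["wake up", "read the hell-escape argument", "decide"]

# U1 and U2 run separately, with no causal link between the two runs, but
# because they are the same procedure applied to the same history, their
# choices cannot come apart.
u1_choice = decision_procedure(shared_history)
u2_choice = decision_procedure(shared_history)
assert u1_choice == u2_choice
```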

Overcoming Bias Commenter:

U1 and U2 are by stipulation the same, except that one is in a simulation, so no Predictor is necessary.

It seems to me that you should consider yourself as the set {U1, U2, ...} = {all systems having this experience}, not as one member.

I agree with Carl here, at least assuming decision theory works normally in a Big World.

Overcoming Bias Commenter:

Thanks for the clarification. I'm still left with this question, though: who plays the part of the Predictor, ensuring that if U1 commits to simulating himself given the chance, U2 will do so also?

In Newcomb, we may not understand the mechanism by which choosing only one box guarantees the $1,000,000, but we know that one-boxing has to work because the Predictor is always right. Where is the analogy in the hellfire case? Why do we know that "one-boxing" (committing to simulations) has to do anything?
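
For concreteness, here is that payoff arithmetic under the two readings, as a minimal sketch (standard Newcomb amounts, with the Predictor's accuracy set to 1.0 per the "always right" stipulation):

```python
SMALL, BIG = 1_000, 1_000_000
ACCURACY = 1.0  # the Predictor is always right

def evidential_value(one_box):
    """Condition the big box's contents on your own choice, since the
    Predictor foresaw it: one-boxers find it full, two-boxers find it empty."""
    p_big_full = ACCURACY if one_box else 1 - ACCURACY
    two_box_bonus = 0 if one_box else SMALL
    return p_big_full * BIG + two_box_bonus

def causal_gain_from_two_boxing(big_box_already_full):
    """Treat the contents as fixed before you choose: taking the second box
    adds SMALL no matter what the Predictor did."""
    one_box_take = BIG if big_box_already_full else 0
    two_box_take = one_box_take + SMALL
    return two_box_take - one_box_take

print(evidential_value(True), evidential_value(False))  # 1,000,000.0 vs 1,000.0
print(causal_gain_from_two_boxing(True))                # 1,000 either way
print(causal_gain_from_two_boxing(False))               # 1,000
```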

If I'm U2, then perhaps I can affect U1 because U1 is a simulation of U2. But if I'm U2, I'm already not being simulated to be tortured....
