
The idea is that you do not know the truth, so you only think in possibilities. The possibility you described should be filed under the list of possibilities that do not require belief, just like the possibility that there is no higher power, or the possibility that there is a higher power but that it does not differentiate between "good" and "bad" persons. It doesn't add anything revolutionary to the idea; it just makes it a little more detailed.


Pascal's Wager will fail to work if God exists, the afterlife is real, all religions are fake, and God will judge our afterlife by what we do in the world.


Pascal's Wager will fail to work if God exists, the afterlife is real, all religions are fake, and God will judge and reward us by what we do in the world.

Say one person works for money and another works for charity; you will appreciate the one who works for charity more. Similarly, if one person gives to charity for the sake of religion and another gives to charity without religion, which one will God appreciate more? Probably the person without religion.


The Christian Bible does not explicitly state the infinite nature of hell; this must be introduced. But if hell can be introduced to that tradition, why can't it be introduced to support any notion, Marxism for instance, even if that notion didn't originally include any reference to infinite suffering?

It appears impossible to meaningfully decrease our probability of infinite suffering if a god who would create hell exists: we cannot know the criteria for avoiding hell even if we know the correct God, we have no guarantee that heaven is an infinite respite from hell, and we have an infinite number of religions to choose between. No tradition appears consistent on the correct means of attaining salvation. A being capable of creating hell appears to my mind likely to continue damning people after they enter heaven. And we have literally an infinite number of religions to choose between, since religions unknown to man, such as a non-evidentialist damning God or the god who really, really hates cows mentioned above, are just as likely to exist as religions known to man (probably more so, as such gods would be simpler, not having had to visit Earth). The chance of choosing the correct religion is thereby rendered infinitesimal enough to counter even infinite reward or punishment.
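
A rough numeric sketch of that last step is below; the candidate counts and the finite stand-in for the reward are made-up figures for illustration, not anything from the comment.

```python
# Illustrative only: the chance of picking the correct religion shrinks as the
# candidate pool grows, dragging the expected payoff down with it.
finite_reward = 10**9  # finite stand-in; the reward in the argument is infinite
for num_candidate_religions in (10, 10**6, 10**12):
    p_correct = 1.0 / num_candidate_religions
    print(num_candidate_religions, p_correct * finite_reward)
# 10 -> 1e8, 1e6 -> 1e3, 1e12 -> 1e-3: for any fixed finite reward the expected
# payoff collapses; the genuinely infinite-reward case is the step being argued over.
```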


burger flipper, you make a very good point. I've been talking about Pascal's wager mainly from an egoist perspective, but utilitarians certainly should be very concerned about their fellow man. While there are a number of missionaries out there, I think the reason we don't see more is similar to the reason more people don't give away most of their income to charity. Also, most Christians are not utilitarians.

I think the utilitarian case for proselytism is somewhat weaker than the egoist case for personal conversion, because while I can imagine only a few scenarios other than hell according to which actions I take now will determine whether I suffer eternally, I can think of lots of scenarios in which actions I take now will prevent infinite suffering on the part of others. This is partly because, when I adopt a utilitarian concern for all sentient organisms, I can prevent infinite suffering by preventing finite amounts of suffering on the part of infinitely many organisms, not just suffering of infinite duration on the part of one particular organism.
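
As a trivial arithmetic illustration of that last sentence (the per-organism figure is an arbitrary placeholder):

```python
# Finite suffering prevented per organism, summed over ever more organisms,
# grows without bound.
suffering_prevented_per_organism = 1.0
for num_organisms in (10, 10**6, 10**12):
    print(num_organisms, suffering_prevented_per_organism * num_organisms)
# Breadth (many finite sufferers) can matter as much as depth (one eternal sufferer).
```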


Why don't believers follow through on the implications of Pascal's wager? If they truly love their fellow man and believe that their fellow man is in danger of eternal torment, how can they ever spend one spare moment not proselytizing?

I'd be much more likely to be swayed by an argument if I saw it applied consistently: to those among the saved as well as those still in danger.


As the expected computational capacity of our world, conditional on its 'basement' status, goes down, the probability that it is a simulation in a world with more computation-friendly laws goes up, and at some point appealing to the Dark Lords of the Matrix has a better expected value than the alternative. The laws of physics in our world (laws of thermodynamics, relativity, etc) do not seem conducive to absurd (10^^^^^^^^^^^^^^^^^^^^^10) numbers of computations, and it seems plausible that a relevant fraction of worlds in Tegmark's ensemble are much more computation-friendly.
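
One way to read that argument is as an odds update, sketched below. The prior, the copy counts, and the self-indication-style weighting are all assumptions added for illustration; the comment itself does not commit to specific numbers or to this update rule.

```python
# Hypothetical odds update: weight "we are simulated" against "we are in the basement"
# by how many copies of an experience like ours each hypothesis is expected to produce.
prior_odds_sim_vs_basement = 1e-6   # assumed prior odds of being simulated
copies_if_simulated = 1e12          # computation-friendly host physics runs many copies
copies_if_basement = 1.0            # thermodynamics/relativity support roughly one

posterior_odds = prior_odds_sim_vs_basement * (copies_if_simulated / copies_if_basement)
print(posterior_odds)  # 1e6: under these made-up numbers the simulation hypothesis dominates
# Lowering the basement's expected computational capacity (or raising the hosts')
# pushes these odds further toward "simulated", which is the comment's direction.
```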


Here's a final note on Carl's hell-escape scenario. Whether the idea works seems to depend on whether one endorses causal decision theory, evidential decision theory, or something in between. My intuition lies strongly with causal decision theory (since probabilities are in the mind and changing your beliefs about who you are doesn't actually change who you are), but there appears to be a large literature on this debate, and I don't doubt that the evidential decision theorists have some good arguments.
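
For readers who want the distinction spelled out, here is a toy Newcomb-style calculation contrasting the two theories; the predictor accuracy and payoffs are the usual illustrative numbers, not anything asserted in this thread.

```python
# Toy Newcomb setup: an opaque box holds $1,000,000 only if one-boxing was predicted;
# a transparent box always holds $1,000. The predictor is right 99% of the time.
accuracy = 0.99
big = 1_000_000
small = 1_000

# Evidential decision theory: treat your own action as evidence about the prediction.
edt_one_box = accuracy * big
edt_two_box = (1 - accuracy) * big + small

# Causal decision theory: the prediction is already fixed and your action cannot
# change it, so for any fixed probability p that the big box is full, two-boxing
# simply adds the small box on top.
p_full = 0.5  # arbitrary; CDT's ranking is the same for every fixed p
cdt_one_box = p_full * big
cdt_two_box = p_full * big + small

print(edt_one_box > edt_two_box)   # True: EDT recommends one-boxing
print(cdt_two_box > cdt_one_box)   # True: CDT recommends two-boxing
```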

steven, interesting point. However, it's not clear to me why the egoist case should parallel the utilitarian one. I would think an egoist would care only about his own particular instantiation, not all of the instantiations of himself that might be run. I guess this gets back to the Unification vs. Duplication discussion above.


I put an argument on my blog against simulations being very practically relevant: Strike the Root


I didn't assert that infinite utility is impossible. My point was that because human brains are finite, they naturally calculate according to a bounded utility function.

This doesn't mean I'm saying the wager is wrong, but that normal humans cannot accept it, because their brains do not work that way. If someone, based on some theory, believes that we should act as though we had unbounded utility functions, he is trying to get around his own brain, and he may well consequently accept the wager (or some variant, as Robin suggested).
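
A minimal sketch of the bounded-utility point, assuming one particular (hypothetical) bounded function and made-up probabilities:

```python
import math

def bounded_utility(payoff, scale=100.0, cap=1.0):
    """A utility function that increases with payoff but never exceeds `cap`."""
    return cap * (1.0 - math.exp(-payoff / scale))

p_wager_pays_off = 1e-9                                          # assumed tiny probability
wager_value = p_wager_pays_off * bounded_utility(float("inf"))   # capped, so at most 1e-9
mundane_value = bounded_utility(10.0)                            # a modest sure payoff

print(wager_value < mundane_value)  # True: with bounded utility the wager loses out
```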


Carl: U1 and U2 are psychologically identical, so their decisions will be correlated.

Where does this correlation come from? If you mean that U1 and U2 have psychologically indistinguishable histories up to the present, that implies nothing about their correlation in the future, does it? U1 and U2 were picked from all the mind-histories in the universe as two that happened to share the same history up to this moment. But unless there's some causal mechanism correlating them, why does that tell us anything about future moments? Or we could say that we chose U1 and U2 to be two mind-histories that are identical in both the past and the future, but there are lots of other mind-histories that are identical in the past only, and U1, U2, ... would have no way to know that they aren't one of those. You can't get a relevant correlation by "data mining" of random noise -- somewhere causation has to be involved.
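
A toy simulation of that point: among many independent random histories, pairs selected only because their pasts happen to match show no correlation in their futures. All parameters below are arbitrary.

```python
import random

random.seed(0)
past_len, future_len, n_agents = 4, 200, 400

# Independent random histories; no causal link between any two of them.
agents = [[random.randint(0, 1) for _ in range(past_len + future_len)]
          for _ in range(n_agents)]

# Select pairs whose pasts are identical (the "shared history up to now" condition),
# then check how often their futures agree.
compared = agreed = 0
for i in range(n_agents):
    for j in range(i + 1, n_agents):
        if agents[i][:past_len] == agents[j][:past_len]:
            compared += future_len
            agreed += sum(a == b for a, b in
                          zip(agents[i][past_len:], agents[j][past_len:]))

print(agreed / compared)  # hovers around 0.5: matched pasts, uncorrelated futures
```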

One causal mechanism is that U1 is programmed to copy whatever U2 does. Yes, there's correlation, but it's in the wrong direction to help U1.

What am I missing here?

Nick: It seems to me that you should consider yourself as the set {U1, U2, ...} = {all systems having this experience}, not as one member.

Does that commit you to the position that Nick Bostrom calls "Unification" on p. 186 of this piece? If so, what do you think of his arguments against Unification in the subsequent three pages? If not, perhaps you could elaborate your position further, or point me to a reference?

Robin: The Christian God scenario doesn't weigh in my beliefs much larger than many other possible gods, but I suspect I may well in principle succumb to some other wagers-to-placate-gods.

Can you think of other wagers that you find compelling? I'd be quite interested to hear them.


U1 and U2 are psychologically identical, so their decisions will be correlated.


U1 and U2 are by stipulation the same, except that one is in a simulation, so no Predictor is necessary.

It seems to me that you should consider yourself as the set {U1, U2, ...} = {all systems having this experience}, not as one member.

I agree with Carl here, at least assuming decision theory works normally in a Big World.


Thanks for the clarification. I'm still left with this question, though: who plays the part of the Predictor, ensuring that if U1 commits to simulating himself given the chance, U2 will do so also?

In Newcomb, we may not understand the mechanism by which choosing only one box guarantees the $1,000,000, but we know that one-boxing has to work because the Predictor is always right. Where is the analogy in the hellfire case? Why do we know that "one-boxing" (committing to simulations) has to do anything?

If I'm U2, then perhaps I can affect U1 because U1 is a simulation of U2. But if I'm U2, I'm already not being simulated to be tortured....


"I'm not sure where U2 comes from."It's a Big World, and various entities are in different regions with identical experiences. In some regions instances of you can access great power and in other regions instances of you are being simulated.

"Does your proposal depend on your preferred answer to Newcomb's paradox -- the two answers being (1) take one box or (2) take both boxes? What if you're the kind of person who prefers answer (2)? The analogue of that position here, I guess, would be to point out that you already are one of U1, U2, ..., UN, and you can't change that, regardless of what commitments you make or simulations you run."

Well, (1) is the right choice, and (2)s will enjoy their poverty and hellfire while (1)s laugh their way to the bank. In Newcomb's problem, everyone agrees that when you take one box you can expect riches, even before you open it. Likewise, here you can expect success in avoiding hellfire if you commit yourself to simulating. If eternal suffering is not enough to get you to play to win, then what is?
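
A minimal numeric sketch of the probability shift being claimed here; the copy count is an arbitrary stand-in for "vast numbers" of simulations.

```python
# Only U1 is being tortured. Without the simulations, the copies sharing this
# experience are {U1, U2}; with them, they are {U1, U2, U3, ..., UN}.
n_simulated_copies = 10**9   # created only if you are the sort of person who simulates

p_tortured_if_not_simulating = 1 / 2
p_tortured_if_simulating = 1 / (2 + n_simulated_copies)

print(p_tortured_if_not_simulating)  # 0.5
print(p_tortured_if_simulating)      # ~1e-9: the bad case becomes a vanishing fraction
```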


Nick and others,

Carl's suggestion about gaining access to vast simulation resources is as follows (correct me if I'm not explaining it accurately). Suppose there are several possible copies of me, all with the same subjective experiences up to some point in the future. Call copy number 1 of Utilitarian "U1," copy number 2 "U2," etc. Carl elaborates:

[Suppose] U1 is being simulated by a sadistic Yahweh-impersonator for infinite torture. U2 is in a world where he can access vast computational resources. If U2 conducts mass simulations, then vast numbers of beings, U3 through UN, will be created with experiences identical to the earlier experiences of U1 and U2. You don't know whether you are U1, U2, or UN. If U2 does no simulations of his own past history, then you have a 50% chance of being U1 (we are ignoring other worlds and simulators here). If U2 does conduct the simulations then your chance of being tortured U1 is infinitesimal. You and U2 are initially psychologically identical, so if you turn out to be the sort of person who would create simulations in U2's place, then U2 is also the sort of person who will create simulations (we can deal with many-worlds considerations by making everything probabilistic, let's not worry about it here). If you then steel yourself (swear oaths, undergo strong conditioning, etc) to simulate in the future if you ever get the chance, then your expected chance of suffering torture is infinitesimal. You are in the position of the winner of Newcomb's problem [i.e., someone who finds out that he's the kind of person who would only take one box and therefore gets $1,000,000].

Carl, I'm not sure where U2 comes from. If I'm U1, my simulator won't create a copy of me and give it vast computational resources, nor will I, as U1, be able to get access to such resources in order to simulate U2 myself, right?

As for the argument, I need to ponder it some more, and I look forward to hearing others' comments. But here are some initial questions:

Is this an exact analogy to Newcomb's paradox? If so, who plays the part of the Predictor, ensuring that if U1 commits to simulating himself given the chance, U2 will do so also? (The parallel to Newcomb is that, if I commit to only taking one box, it must contain the $1,000,000 because the Predictor is never wrong.)

Does your proposal depend on your preferred answer to Newcomb's paradox -- the two answers being (1) take one box or (2) take both boxes? What if you're the kind of person who prefers answer (2)? The analogue of that position here, I guess, would be to point out that you already are one of U1, U2, ..., UN, and you can't change that, regardless of what commitments you make or simulations you run.

Suppose you're U2. It's true that, after you run N-2 simulations, you no longer can tell which of U1, U2, ..., UN you are. But causing yourself to become more uncertain doesn't change your actual state. I can give myself brain damage and thereby become less certain of who I am, but that doesn't change who I am....
