Where Does Pascal’s Wager Fail?

The topic of Pascal’s wager has been mentioned several times before on Overcoming Bias, most notably in Eliezer’s post on Pascal’s mugging. I’m interested in discussing the question with specific reference to its original context: religion. My assumption is that almost all readers agree that the wager fails in this context — but where exactly?

One common objection is the many gods argument: While it’s true you might be punished eternally (or, if you like, for 3^^^^3 years) if and only if you don’t follow, say, Christianity, it’s possible to imagine other scenarios where you would be punished if and only if you do follow Christianity; thus, it’s claimed, the different possibilities cancel each other out. In responding to the Pascal’s-mugging post, Michael Vassar suggested that we should have “equal priors due to complexity, equal posteriors due to lack of entanglement between claims and facts.” But are the priors really equal? Intuitively, the anti-Christian God should take more bits to describe, since that hypothesis requires stating the entire concept of Christianity and then a little extra. I don’t know whether that’s the case, but my point is that it’s not obvious that, bit for bit, the hypotheses are identical in Kolmogorov complexity. Moreover, the set of relevant hypotheses is bigger than these two: There are tons of hypotheses according to which whether you follow Christianity will make a difference as to whether you suffer for 3^^^^3 years, and I’m not convinced that they all exactly cancel out in prior probability.
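
As a toy illustration of the bit-counting intuition (entirely illustrative: true Kolmogorov complexity is uncomputable, and compressed length is only a crude stand-in for it):

```python
import zlib

# Crude proxy: use compressed byte-length as a stand-in for description length.
# Real Kolmogorov complexity is uncomputable; this only illustrates the idea.
def description_bits(hypothesis: str) -> int:
    return 8 * len(zlib.compress(hypothesis.encode()))

christian_god = "The God described by Christianity exists and punishes non-followers."
anti_christian_god = ("The God described by Christianity exists, except that "
                      "it punishes followers rather than non-followers.")

# Under a complexity prior, P(h) falls off like 2 ** -description_bits(h), so
# the hypothesis that needs "a little extra" description gets a smaller prior.
for h in (christian_god, anti_christian_god):
    print(description_bits(h), h)
```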

Moreover, is there really no entanglement? Is the probability of observing the world we do exactly the same given Christianity as given anti-Christianity? Is the probability, given Christianity, that billions of people would be persuaded of the truth of the Christian God’s message exactly the same as the probability, given anti-Christianity, that billions of people would be fooled into believing the Christian God’s message? Or for that matter, are the probabilities that billions of people will follow non-Christian religions equal under the two scenarios? And so on. There seems to be just too much data in the world for our probabilities to remain symmetric.

This relates to a second complaint about the wager: With vast amounts of data to process and an enormous space of possible religious hypotheses to search, Pascal’s wager (which is just an optimization problem) is computationally infeasible, especially for human minds. This is true, but even if we can’t find the global optimum (if one exists?), I don’t see why we shouldn’t make what local improvements we can, given our limited knowledge, processing ability, and creativity in specifying hypotheses. Just by considering a few basic factual predictions that various religions make, for example, it ought to be possible to separate hypotheses of similar prior probability by many orders of magnitude in their posteriors. We could make some progress on these back-of-the-envelope calculations even without having a full Solomonoff-inducting AI (though the latter would indeed be extraordinarily helpful).
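
To give a feel for the numbers (the likelihoods below are invented purely for illustration):

```python
# Invented numbers: two hypotheses with equal priors, where hypothesis A
# assigns probability 0.5 to each of ten observed facts and hypothesis B
# assigns probability 0.01 to each.
prior_a = prior_b = 0.5
likelihood_a = 0.5 ** 10    # ~9.8e-4
likelihood_b = 0.01 ** 10   # 1e-20

posterior_odds = (prior_a * likelihood_a) / (prior_b * likelihood_b)
print(f"{posterior_odds:.2e}")  # ~9.77e+16: about 17 orders of magnitude apart
```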

In view of the high uncertainty surrounding the question of which religion (possibly including atheism) to choose, maybe it would be best to avoid making a commitment now, since you might learn more as time goes on that would affect your choice. Moreover, there’s a small chance that in trying to adhere to the commands of a particular religion and in surrounding yourself with fellow believers, you might blunt your ability to think rationally. This argument is fine as far as it goes (though you should also consider your probability of dying before you finally do make up your mind), but then why not spend considerable effort doing further research on the question of which religion to follow? The expected value of additional information would seem to be extraordinarily high.

You might reply that the problem of which religion to follow is overly narrow: There are lots of other projects to work on, perhaps involving more probable scenarios than does Pascal’s wager. For instance, maybe you’re aiming for physical immortality via ordinary materialist means and intend to spend all your time researching how best to stay alive until significant anti-ageing technologies kick in. Fair enough, but what if — as is true in my case — you’re more concerned about avoiding eternal suffering than about achieving eternal blissful life? Are there secular scenarios that would require you to set aside Pascal’s wager in the religious case in order to prevent yourself from experiencing massive amounts of suffering?

Finally, some might object to using an unbounded utility function because it leads to mathematical difficulties. I admit that I don’t like the idea of bounding utility functions, but even if we do bound them, can we not set the bounds large enough that speculative Pascalian scenarios still dominate over more minor, worldly considerations?
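
Here is the kind of back-of-the-envelope check I have in mind, with stand-in numbers chosen purely for illustration:

```python
# Stand-in numbers: even with utility capped at a large finite bound, a
# low-probability Pascalian scenario can still swamp a worldly consideration.
UTILITY_BOUND = 10**15   # assumed cap on the magnitude of the utility function
p_scenario = 10**-9      # assumed probability of the speculative scenario
worldly_stakes = 10**3   # utility at stake in an ordinary worldly decision

pascalian_term = p_scenario * UTILITY_BOUND   # = 1e6
print(pascalian_term > worldly_stakes)        # True: the bound is big enough
```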

  • http://uncommon-priors.com Paul Gowder

    Well, here’s one line that you haven’t mentioned: most interpretations of Christianity hold that sincere belief is required, and most people think direct doxastic voluntarism is false — that is, it’s impossible to directly decide to hold a sincere belief.

    That’s not a complete answer, because indirect doxastic voluntarism might be true — that is, you might be able to manipulate your life so that you come to adopt that kind of belief (e.g., by joining the church, hanging out with believers, etc.), but then you’re back to the choice problem in sharper form (that is, if you think you’re going to become a committed believer of a religion by adopting its practices, then you can’t even try on different religions for investigational purposes).

    As for the research point, the expected value of additional information is high only if you think there’s a non-infinitesimal chance that an additional bit of religious information will lead to changed beliefs.

  • steven

    For just one thing, if this sort of reasoning is legitimate then why not take it a step further and stop worrying about any finite number of people going to hell?

  • steven

    For another just one thing, “God punishes you for doing whatever would be immoral in the absence of afterlife incentives” seems less improbable than Christian morality.

  • http://www.mccaughan.org.uk/g/ g

    Paul, Pascal’s original version of the argument was for indirect belief-manipulation, not direct.

    Utilitarian, it seems to me that the many-gods and computationally-infeasible arguments between them completely sink the wager. It’s not just ChristianGod and AntiChristianGod that you have to consider, of course (consider, e.g., Steven’s another-just-one-thing, which I’m inclined to agree with), and I see no reason to think that any amount of research will determine with much confidence which of the vastly many possible gods contribute most to the expected utility differences. If you take infinite utilities seriously, it seems like you’re faced with a whole lot of infinite alleged contributions to those differences, and the most important thing in balancing them out might be not the relative probability of the gods involved but the exact hotness of their hells and the exact felicity of their heavens. And, e.g., whether the eternity of those heavens and hells is really assured conditional on the gods’ existence; many versions of Christianity, for instance, say that once upon a time there was a war in heaven that changed things greatly; how confident can a Christian who believes that really be that there won’t be another?

    Do you apply any discount factor to future utilities? If you do, then it’s far from clear that ChristianGod really offers/threatens an infinite utility difference. If you don’t, then e.g. the long-term consequences of any action of yours are likely to be dominated by unpredictable amplified randomness, thanks to chaos theory and all that.
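
    A quick sketch of the arithmetic (assuming, purely for illustration, a constant per-period utility u > 0 and discount factor γ):

    ```latex
    \sum_{t=0}^{\infty} \gamma^{t} u = \frac{u}{1-\gamma} < \infty \quad (0 < \gamma < 1),
    \qquad \text{whereas} \qquad \sum_{t=0}^{\infty} u \ \text{ diverges}.
    ```

    So any strict discounting turns an eternal heaven or hell into a finite utility difference; only the undiscounted case keeps it infinite.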

    Oh, and there’s no law that says you have to be a personal expected-utility maximizer anyway.

  • Robin

    I don’t buy Pascal’s Wager for a simpler reason. Belief in Christianity requires that I suspend logic in favor of dogmatism. That’s a pretty big sacrifice and I’d find life (and presumably heaven) pretty hard to live if I always had to reject reason.

    Aside from that, the wager is essentially asking you to believe in nonsense to achieve eternal life and happiness, an offer only a conman would present and only a fool would accept.

  • Ben Jones

    why not spend considerable effort doing further research on the question of which religion to follow? The expected value of additional information would seem to be extraordinarily high.

    Don’t like this bit much. How about if I postulate a religion that says that my afterlife is 3^^^3 times more blissful than the Christian/Muslim/whatever heaven/equivalent, and 10x easier to get into? That would surely blow them all out of the water, even accounting for the fact that it only has one adherent, compared to hundreds of millions. Maths doesn’t work here.

    Computing utility around arbitrary measurements of bliss/rapture is overcomplicating matters. If the basis of the argument is ‘well it’s probably nonsense, but the massive utilities involved make it worth considering’ then I’d rather spend my time in the pub.

    Thought experiment: imagine that Christianity in its current form were the only religion in the world, and the majority of the world was composed of believers. Would you embrace it happy in the knowledge that you’re quite safely maximising your expected utility?

  • http://causalityrelay.wordpress.com/ Vladimir Nesov

    Probability is a representation of evidence. The “data” you mention is hardly evidence for these hypotheses. You can’t extract knowledge from zero evidence even by Kolmogorov’s magic trick. Occam’s razor works by exploiting existing notation, by converting evidence implicit in notation into probability. In this case, one doesn’t expect any evidence at all, and so there is also no evidence implicit in notation.

  • J Thomas

    Pascal’s wager does work when you’re playing bridge. When you don’t know which opponent holds which cards, and you lose unless the cards come out right, you might as well play as if they do come out right. You have nothing to lose.

    But to do that you have to know the rules of the game. You have to know how the things you don’t know fit into the bigger pattern.

    If there are only two choices, either God is exactly as represented or there is no afterlife at all, then it works. And the most obvious third choice, that God is some sadist who would tell people what he wants and then extinguish them or torture them forever when they do it, is not worth considering. A god like that might easily torture you forever regardless. You could spend ten thousand years in Heaven agonizing over just how to avoid getting thrown into Hell *today*, and then one day God decides you kissed his toenails a little too fervently and you’re in Hell for eternity anyway.

    If you could be sure it was the Christian god or nothing, then it would be a good bet.

    And if you’re the kind of person who’d take that bet, and God doesn’t want that kind of person in Heaven, well, He wouldn’t have wanted you anyway.

  • steven

    It also seems to me there are several possible future technologies that have a (very) small chance of creating “massive amounts of suffering”, but nobody even among transhumanists seems to worry about this much, probably because they’re not negative-leaning utilitarians (and I’m not sure either way whether they should be). So that’s yet another argument why even a pure negative-utilitarian shouldn’t waste his “rational capital” by believing in God.

  • http://yudkowsky.net/ Eliezer Yudkowsky

    If you follow my religion, you get eternal bliss in the afterlife plus a pony.

  • steven

    And then there’s game theory to take into account… “every time you stop masturbating because God might kill a kitten, Allah sees evidence that it works and threatens to kill a puppy”.

  • http://occludedsun.wordpress.com Caledonian

    Pascal’s Wager fails for a very simple reason: justification is required to assert a proposition.

    If we have no grounds for postulating that following a course of action will benefit us, we are equally compelled to postulate that following the course of action will harm us. The two cancel, like equal weights on either side of an old-fashioned scale – their torques are equal and opposite, resulting in no net effect. We need reasons to believe one outcome is more likely than the other before we can be inclined towards any conclusion, before the scale can be tipped one way or the other.

    If you ignore parts of the possibility space at will, you can incline yourself any way you like – just as you can prove whatever you wish if you choose the right premises. Any game can be ‘won’ if you ignore the rules.

  • Marshall

    This is a logical puzzle – and as such fine, but it seems to be a mistake to conflate the puzzling with the accidental content. No mature intelligence would waste its resources on the conflation. And this makes me wonder, whether a young artificial intelligence would suck up the world’s energy on such and other puzzles? And while we’re at it – I am also convinced that morality has no practical importance in the world of mature adults – but maybe our interface with these weird and wonderful machines Eli dreams of requires the voice of the gods. But if intelligence reaches beyond its bounds – how long will such a morality last?

  • http://www.utilitarian-essays.com/ Utilitarian

    Thanks for the comments!

    Paul: most interpretations of Christianity hold that sincere belief is required, and most people think direct doxastic voluntarism is false — that is, it’s impossible to directly decide to hold a sincere belief.

    To some extent, Pascal’s wager (especially as I’ve formulated it) is just an intellectualized version of fearing hell, which I think many Christians agree is a fine place to start “seeking God,” as they say. You may be right that many versions of Christianity require going further and convincing oneself to hold a sincere belief, but presumably there are some versions that don’t. This is especially true when we consider the space of all variants of Christianity, not just those that people actually believe. (Variants of Christianity make almost the same predictions as the class of hypotheses ordinarily regarded as “Christianity” and hence have similar likelihood ratios.)

    Finally, if you think Christianity requires sincere belief in a way that would be an obstacle, why not choose another religion that doesn’t?

    steven: if this sort of reasoning is legitimate then why not take it a step further and stop worrying about any finite number of people going to hell?

    Actually, that’s an excellent point; I’ll think about it further. Though I personally don’t see it as a reductio.

    I guess I’ve been discussing Pascal’s wager in the egoist context of wanting to prevent oneself from going to hell, but the utilitarian question of how to prevent the maximum expected number of people from going to hell is also interesting. This introduces problems of infinity, but not obviously any more than any other utilitarian question. If you like, we could think only about finite numbers penalized by Kolmogorov complexity, as Eliezer did in the “Pascal’s Mugging” post.

    “God punishes you for doing whatever would be immoral in the absence of afterlife incentives” seems less improbable than Christian morality.

    I’m not sure if that implies you’re better off being an atheist than a believer in afterlife incentives (perhaps believers in an afterlife could still refrain from doing those things that would be immoral even in the absence of afterlife incentives?), but if it does, your version of Pascal’s wager places enormous expected value on convincing people to reject belief in an afterlife. A utilitarian with that view might find it extraordinarily cost-effective to convert as many people as possible to atheism. That needn’t be a reductio, just an implication.

    g: the most important thing in balancing them out might be not the relative probability of the gods involved but the exact hotness of their hells and the exact felicity of their heavens. And, e.g., whether the eternity of those heavens and hells are really assured conditional on the gods’ existence; many versions of Christianity, for instance, say that once upon a time there was a war in heaven that changed things greatly; how confident can a Christian who believes that really be that there won’t be another?

    Those are all important considerations. It doesn’t seem impossible to consider at least a few scenarios we’re able to imagine (e.g., those that you and others suggested) and come up with the best answer we can, even if it’s not the best overall answer.

    Suppose we had an AI apply Solomonoff induction to the question and come up with an answer. Is there a reason an expected-utility maximizer shouldn’t follow that answer? (It ought to be possible to separate the question of computational feasibility from the question of whether the wager, given knowledge of the computed answer, would be valid.)

    Do you apply any discount factor to future utilities? If you do, then it’s far from clear that ChristianGod really offers/threatens an infinite utility difference. If you don’t, then e.g. the long-term consequences of any action of yours are likely to be dominated by unpredictable amplified randomness, thanks to chaos theory and all that.

    Even if you apply discount factors, it still might be worth checking that the (now finite) expected cost of hell is sufficiently outweighed by other considerations, but it’s true that doing this calculation wouldn’t be nearly so urgent.

    I prefer not to apply any discount factor to future utilities, apart from probabilistic discounting. Yes, this does lead one’s decisions to be highly sensitive to small probabilities of vast utility consequences, but this is a general symptom of non-discounting utilitarian calculations, not just Pascal’s wager.

    Robin: That’s a pretty big sacrifice and I’d find life (and presumably heaven) pretty hard to live if I always had to reject reason.

    The cost in this life is finite — and presumably less than (some small probability differential) * 3^^^^3. (That doesn’t make the wager easy to follow in practice, of course, but neither are other potentially optimal courses of action!)

    Would heaven-with-dogmatism not still be preferable to hell?

    Ben Jones: How about if I postulate a religion that says that my afterlife is 3^^^3 times more blissful than the Christian/Muslim/whatever heaven/equivalent, and 10x easier to get into? That would surely blow them all out of the water, even accounting for the fact that it only has one adherent, compared to hundreds of millions. Maths doesn’t work here.

    Of course, Christians can claim that their hell is 3^^^^3 times worse than yours (although none of them has yet…). Your statement is worth considering as part of Pascal’s wager, though.

    Thought experiment: imagine that Christianity in its current form were the only religion in the world, and the majority of the world was composed of believers. Would you embrace it happy in the knowledge that you’re quite safely maximising your expected utility?

    Well, under those circumstances the likelihood ratio of Christianity relative to other hypotheses would probably be higher than it currently is, which would strengthen the appeal of the argument. I’m not sure if there would be anything fundamentally different about the wager, though. (Were you thinking there would be?)

    Vladimir: In this case, one doesn’t expect any evidence at all […].

    It’s not obvious to me that there isn’t any entanglement between the hypotheses under consideration and the state of the world; if there is entanglement, that’s evidence.

    This objection applies equally well against Pascal’s mugging. Here’s a quote from that discussion:

    Nick Tarleton: Tom is right that the possibility that typing QWERTYUIOP will destroy the universe can be safely ignored; there is no evidence either way, so the probability equals the prior, and the Solomonoff prior that typing QWERTYUIOP will save the universe is, as far as we know, exactly the same.

    Eliezer Yudkowsky: Exactly the same? These are different scenarios. What happens if an AI actually calculates the prior probabilities, using a Solomonoff technique, without any a priori desire that things should exactly cancel out?

    Caledonian: If we have no grounds for postulating that following a course of action will benefit us, we are equally compelled to postulate that following the course of action will harm us.

    Agreed, but the question is whether we have any grounds for asymmetry. Suppose someone tells you that ingesting Chemical XYZ will cause you severe bodily harm. Are your posteriors for Chemical XYZ causing harm vs. causing benefit still equal?

  • http://occludedsun.wordpress.com Caledonian

    Agreed, but the question is whether we have any grounds for asymmetry. Suppose someone tells you that ingesting Chemical XYZ will cause you severe bodily harm. Are your posteriors for Chemical XYZ causing harm vs. causing benefit still equal?

    Depends on who’s telling me. I need a good reason to believe that their claim is justified. They say they have knowledge – how did they acquire it? Ultimately a claim about X can only be justified by observation of X, and nothing else. Every intermediate assumption I make in substitution of actual evidence weakens the strength of my conclusion. Is the person generally trustworthy? How do I determine that? Are they trustworthy on this particular topic? How do I determine that?

    In most Pascal’s-esque situations, the people making claims are considered to be authorities by the fact that they’re considered to be authorities. The general belief has no actual support, just an ancestral chain of credulity that stretches back into the mists of time. It’s turtles all the way down, so to speak.

    The fact that someone makes a claim, by itself, does not constitute evidence of the claim.

  • http://jewishatheist.blogspot.com JewishAtheist

    If you’re going to go by Pascal’s Wager, you should pick your religion carefully:

    Based on careful consideration, I recommend a branch of Christianity which has 1) a large number of followers, 2) a very good Heaven and a very scary, infinite Hell, 3) an intolerant God, and 4) a relatively easy path to Heaven. For example, a Christian denomination that demands only that you believe in Jesus seems perfect.

    Funny how most people who use Pascal’s Wager want you to accept THEIR religion rather than choosing the best bet.

  • Carl Shulman

    If you think that Nick Bostrom’s Simulation Argument is correct, then trying to create a positive singularity resembles Pascal’s wager (you’re unlikely to actually get access to vast amounts of future computational power, since you are almost certainly a simulation).

    Utilitarian,

    As we’ve discussed (but other readers have not), you’re really talking about an intelligence capable of generating arbitrarily large punishments (or rewards) whose decisions lead to vast utility from following/believing in Christianity. The supposed God of the Bible and Christian philosophy itself has other extremely improbable features that penalize the hypothesis of its existence relative to Dark Lords of the Matrix that happen to reward Christianity, e.g.:

    1. Biblical errors of basic geology, biology, and mathematics, plus linguistic/textual evidence, showing that the scriptures were not written by a supreme being attempting to convey accurate information, or by people divinely inspired for accuracy.
    2. Contradictions about the motivations and values of the entity in scriptures.
    3. The Christian God supposedly has no creator and has always existed, imposing an astronomical probability penalty relative to superintelligences descended from evolved beings in universes with laws of physics permitting BIG Simulations.
    4. The God of the Bible is happy to demonstrate its existence with incontrovertible miracles, so the discovery that miracles no longer happen when they can be checked for fraudulent origin (no casual miraculous resurrections of the dead such as Lazarus, or even miraculous healing of amputees) is significant Bayesian evidence against it. Likewise for ‘Problem of Evil’ concerns, and the failure of prophecy.

    The take-away is that, conditional on there being arbitrarily vast punishments for failing to believe that Jesus is the son of an omnipotent, omniscient, uncreated deity, Jesus is almost certainly not such.

    Further, ‘Atheist Gods/Superintelligences’ which punish attempts to induce radical self-delusion have more plausible motivations relative to Christianity-incentivizing ones. Values of rationality in belief and retributivism (ingredients for Atheist God) may be found in many different beings, but why would superintelligences seek to reward created beings for deluding themselves into believing a religion that is obviously false to both the superintelligences and the created beings?

  • Tim Tyler

    A Christian telling me that I should believe – or risk burning in hell – does not cause me to update my beliefs: they are obviously untrustworthy.

    Would I like to be infected by the virus of faith? Thanks – but, no thanks!

  • http://profile.typekey.com/utilitarian/ Utilitarian

    Carl: The take-away is that, conditional on there being arbitrarily vast punishments for failing to believe that Jesus is the son of an omnipotent, omniscient, uncreated deity, Jesus is almost certainly not such.

    Maybe, but that wouldn’t affect Pascal’s wager at all, would it?

    Further, ‘Atheist Gods/Superintelligences’ which punish attempts to induce radical self-delusion have more plausible motivations relative to Christianity-incentivizing ones.

    Interesting point. As you’ve suggested before, it’s definitely worth thinking more about the space of possible simulators. Still, I can imagine simulators who might want to “play God” and actually bring about in a simulated world the types of religions that their ancestors had once believed.

    I suppose this is a question we can potentially get a fair amount of insight into by asking other people, since if we are a moderately accurate ancestor simulation, then the inclinations of those around us give some indication of the inclinations of our simulators. So here’s a poll for the group: If you were to punish your simulations for something (and I really hope you wouldn’t!), what would it be?

  • http://profile.typekey.com/RobinZ/ Robin Z

    You know what my take on it is? Pascal’s Wager fails because it’s bullshit.

    No, really. It’s obviously bullshit. It’s precisely the sort of thing that someone would make up to manipulate a fellow. Therefore, the only feasible solution is to dismiss the claim unless it’s accompanied by credible evidence.

  • Carl Shulman

    “Still, I can imagine simulators who might want to “play God” and actually bring about in a simulated world the types of religions that their ancestors had once believed.”
    They wouldn’t do this because of belief in the religion of course, since on the hypothesis of the religion’s truth the activity would be pointless. We’re talking about ultra-sadists who find it amusing to consign people to indefinite torture based on arbitrary criteria: non-Christians, non-Thorists, redheads, people named George. Given this, why would you expect number of followers or prima facie probability to predict the likelihood that a religion is a safe haven? If the entity was rewarding assignation of credence according to evidence, then it would reward atheism, and if it was not, then why prefer the most prima facie probable of Christianity, Mormonism and Scientology?

  • http://occludedsun.wordpress.com Caledonian

    It’s obviously bullshit.

    Of course! But what are the points you check to make that determination?

    If we want to increase our understanding, noting that we find something obvious doesn’t help. How do we find it obvious?

  • Aspiring Vulcan

    What about a “Yudkowsky-God” who sends you to heaven only if you are a Bayesian? Wouldn’t Bayes’ Rule be a lot simpler to explain than all of Christianity?

  • http://profile.typekey.com/RobinZ/ Robin Z

    Caledonian 02:04 PM: We find it obvious because it’s clearly a pattern we can make up to fit any situation. “What if washing the dishes will cause the water pressure in the pipes to drop at just the wrong time and make the nuclear power plant melt down?” “What if God really hates cows, and will only let people into Heaven if they kill lots of them?” “What if aliens are watching us, and will destroy the entire human species if we don’t stop killing each other?” The very ease with which we make up Pascalian wagers implies that we should expect to encounter lots of them, and that if we permit unsubstantiated claims of that form to influence our behavior, we should expect to be entirely enthralled to random forces.

  • http://occludedsun.wordpress.com Caledonian

    We find it obvious because it’s clearly a pattern we can make up to fit any situation.

    That’s not quite it. We find it obvious because we recognize that the argument works just as well for the negation of a claim as for its affirmation, and therefore it can’t tip the scale one way or another. If the point works against your argument as well as for it, making the point has no effect on the implications of our statements.

    The argument has null content. That’s why you find it obvious, because the demonstration of that lack of meaning is so simple as to be trivial.

  • steven

    Once you think of God as a simulator, you can discount him from utilitarian calculations; if we’re in base reality, we can affect all simulations rather than just this one, so that’s what we should assume. Less sure what you’d have to do as an egoist though.

  • James Andrix

    I think the wager is missing a few o‘s.

    It returns a preference from a set of preferences and probabilities. It should stick with goods, not gods.

    Of course, as with the many-gods problem, you can have an infinite number of potential goods, but I think that leaves a high utility for finding out which good to value.

    (My impression from Wikipedia is that Pascal understood the limits of his argument better than the people I hear using it today. He had already ruled out all other gods, and didn’t intend it as a proof-of-fact. Also note that he died before he presented the wager himself, so he might have polished it up or even later rejected it. He seems to have been quite brilliant.)

  • http://www.transhumangoodness.blogspot.com Roko

    Utilitarian makes a good set of points here. There’s something fishy about Pascal’s wager, though. The use of infinite utilities feels like a cheap trick to me. To give a sense of why this is the case, would Utilitarian consider becoming a believer in my new religion:

    Roko’s religion: You must donate all of your worldly assets to me; otherwise you will receive aleph-omega worth of negative utility when you die.

    The Bible specifies infinity in hell, but this presumably refers to the countable infinity of the natural numbers. Since my religion offers a strictly larger cardinal quantity of negative utility, you should convert from Christianity to my new religion regardless of the probability that I am making it up.

  • Tom Breton (Tehom)

    Pascal’s wager assumes that the expected value of an imperfectly reliable promise increases without bound as a function of the value of what is promised. It does not.

    For example, say I promised you a sum of money to do something you don’t want to do – I used to use “paint your face blue” as an example, but that was before Blue Man Group. Also suppose (and I know this is a real counterfactual stretch) that I seemed somewhat unreliable – say I’d promised some stuff in the past and welshed on it before. I offer you just enough money to be worth it if I came through, but I’m unreliable, so you turn it down. Now I up the value of the offer. A million dollars to paint your face blue – but I’m probably going to welsh and laugh, so you turn it down. How about a billion? 3^^^3 dollars? You say your money-to-utility function is flat well before that point, so I offer 3^^^3 utiles, each equivalent to your marginal utility of one dollar. Anybody find that empty promise tempting? I know I wouldn’t.

  • Daniel Griffin

    Pascal’s Wager DOES NOT exist in a vacuum apart from how he presented it and set it up in the Pensees. Pascal writes that “we find ourselves in a state to be pitied,” with “too much evidence to deny and too little to be sure.” It isn’t as though you are a computer prepared to fill yourself with all information available. He posits you as an individual already at an impasse. You look around and can’t help but believe and yet at the same time don’t feel right sitting idly by and believing. The Wager arises after you have already computed all you have above and reached a point where you cannot decide. Then Pascal jumps in and reminds you that you must either choose or fail to choose to pursue a life of following the Christian God. Pascal also argues that without God the world is meaningless. So whatever possibility appears to YOU, an individual, you jump at the only avenue that allows for any hope.

    This clearly would not work as presented to a Buddhist or Muslim adherent in attempting to get them to convert to Christianity. The Wager presupposes all the background presented above. If someone is at an impasse and believes it is impossible to logically justify either choice (which may happen if something such as Divine Hiddenness holds), and this individual also recognizes that the world, if it did not contain God, would be meaningless whatever they chose, then the only choice available to them with any value would be to pursue an avenue to attempt to believe (in an indirect doxastic fashion, as Pascal himself mentions). Now, for someone who grew up in the depths of the Amazon jungle (and was aware only of their tribal history/religion), the Wager would be meaningless. Pascal intended it for his friends who were at an impasse, NOT as a blanket tool for evangelism to anyone.

    Yours is a massive straw man argument. Perhaps I might argue that screwdrivers are useless because they (from my perspective and position) have been incapable of helping me construct my new SuperRideX’thousand bicycle (never mind the fact that I forgot to mention that I have no screws, let alone the parts to the SuperRideX’thousand). Similarly, the Wager is often employed in arenas void of the prerequisites that Pascal presented. It is a tool to get beyond the impasse of doubt.

    Read the man himself!
    Though the readership of this blog at times astounds me in its depth of wisdom and capacity to understand the unapparent, I am not sure I would want you all on my jury, for fear that, though I be given a chance to defend myself, you would prefer instead to base your examination on mere hearsay of my own defense.

  • nick

    So, I suppose nobody here makes decisions by creating their own “wager” in their head.

  • nick

    I’m with Daniel here.

  • http://profile.typekey.com/utilitarian/ Utilitarian

    Roko, for what it’s worth, Georg Cantor identified the Christian God with absolute infinity, an admittedly ill-defined “mathematical” object supposedly bigger than all transfinite numbers (discussed further here). But I agree there’s something fishy going on with the “My infinity is bigger than yours” game.

    Still, this is a general problem for consequentialism that needs to be worked out. (Nick Bostrom’s paper on the subject can’t be cited enough!) As a hack to make things tractable, I’d be happy to start by just considering very large finite numbers penalized by Kolmogorov complexity, as Eliezer did in the Pascal’s-mugging post, and see where that gets us. Doing that seems better than doing nothing at all.
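
    A toy version of that hack (the complexity figures are assumed purely for illustration; true Kolmogorov complexities are uncomputable):

    ```python
    # Weight each scenario's utility by a complexity-penalized prior 2**-k,
    # where k is an assumed description length in bits (illustrative only).
    scenarios = [
        # (label, assumed complexity k in bits, utility magnitude at stake)
        ("mundane worldly benefit", 10, 1e2),
        ("speculative hell scenario", 60, 1e20),
    ]

    for label, k_bits, utility in scenarios:
        contribution = 2.0 ** -k_bits * utility
        print(label, contribution)

    # The hell scenario still dominates (~87 vs ~0.1); the penalty can't save
    # you unless it grows as fast as the utility, the crux of the mugging.
    ```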

    Daniel, thanks for that context. As many philosophers do, I used the term “Pascal’s wager” as a general label for a class of philosophical problems about religion, without intending to imply anything about what Pascal himself said. For instance, while I consider avoidance of hell the central concern of the wager, Pascal was concerned only with heaven / union with God: “If you gain, you gain all; if you lose, you lose nothing.”

  • http://profile.typekey.com/RobinZ/ Robin Z

    Caledonian 02:24 PM:

    That’s not quite it. We find it obvious because we recognize that the argument works just as well for the negation of a claim as for the affirmation, and therefore it can’t tip the scale one way or another. If the point works against your argument as well as works towards, making the point has no effect on the implications of our statements.

    The argument has null content. That’s why you find it obvious, because the demonstration of that lack of meaning is so simple as to be trivial.

    This is a reason to be suspicious, but not what I was driving at. What I was driving at was that being susceptible to Pascalian wagers is a guaranteed losing strategy for obvious reasons, and therefore it doesn’t matter whether it constitutes evidence because it doesn’t constitute evidence you can act on.

    That is to say, unless you have other evidence that supports the claimed threat, of course – thank you, Mr. Griffin. For example, just because “get my brother out of jail or I’ll blow up City Hall” has the same form as Pascal’s Wager doesn’t make it equally meaningless, especially if the guy saying it has a box with his hand in it and there’s dynamite at his house. (Man, that was a good Dragnet episode.)

  • nick

    Robin (“evidence of the claimed threat”): Any wager loses its meaning outside of its context. This previous evidence and context is assumed. It seems quite obvious that if you assume something different, the wager is pointless. Criticizing the wager outside of its context is just talking to hear yourself talk. Referring to a lack of evidence shows you dogmatically assume something else.
    To understand something you often have to try it on in experimental fashion. Refusing to doesn’t make you logical.

  • Alan

    Pascal’s Wager seems to have taken on a life of its own. It is natural to think of Pascal’s Theorem or Pascal’s Triangle, and to imagine a wager attributed to him as an optimization problem. This framing of his so-called wager is, I think, a bias in itself. Curious, I dug out my French edition of his Pensees to check. Even at 350 years old, the language is relatively accessible and clear.

    He dedicates his work to God, and expressly says that this is not intended to refer to the god of the philosophers and savants. He is candid, personal, emotional. A glance at his table of contents reveals that his range of topics includes some means of believing (des moyens de croire), the misfortune of man without God (Misère de l’homme sans Dieu), miracles (Les miracles), proofs of Jesus Christ (Les preuves de Jésus-Christ), and so on along that vein. He frames his section on the “wager” as Infini. Rien — (my rough translation): Infinity. Nothing. “Our soul is thrown into the body, where it finds number, time, dimension. It (the soul) reasons down here, and calls that nature, necessity, and cannot believe otherwise.” The way he frames the wager, a person’s reason isn’t damaged whether choosing for or against the existence of God, since you necessarily have to choose. This may seem nonsensical, but consider it in the context of his aphorism that the heart has its reasons that reason is not aware of. I don’t think he views choosing God’s existence as generating eternal ennui–quite the contrary–in fact, he writes against ennui. But he also gives a sober assessment, to the effect:

    “If one submits everything to reason, our religion won’t have anything mysterious or supernatural about it. But if one offends the principles of reason, our religion would be absurd and ridiculous.” *273-182, my transl.

    Back to the conclusion of his dedication:

    “Eternally in joy for a day of exercise upon the earth.”

  • http://profile.typekey.com/RobinZ/ Robin Z

    nick @ 08:57 PM: Sorry – replace “Pascal’s Wager” in the penultimate sentence with “the standard philosopher’s reduction of Pascal’s Wager”, which is, in fact, divorced of real-world evidence (see Roko 06:11 PM for an example). The question of whether Christianity is reasonable is, of course, completely inappropriate for this thread.

  • Yvain

    I find all of the standard tricks used against Pascal’s Wager intellectually unsatisfying because none of them is at the root of my failure to accept it. Yes, it might be a good point that there could be an “atheist God” who punishes anyone who accepts Pascal’s Wager. But even if a super-intelligent source whom I trusted absolutely informed me that there was definitely either the Catholic God or no god at all, I feel I would still regard Pascal’s Wager as a bad deal. So it would be dishonest of me to say that the possibility of an atheist god “solves” Pascal’s Wager.

    The same thing is true for a lot of the other solutions proposed. Even if this super-intelligent source assured me that yes, if there is a God He will let people into Heaven even if their faith is only based on Pascal’s Wager, that if there is a God He will not punish you for your cynical attraction to incentives, and so on, and re-emphasized that it was DEFINITELY either the Catholic God or nothing, I still wouldn’t happily become a Catholic.

    Whatever the solution, I think it’s probably the same for Pascal’s Wager, Pascal’s Mugging, and the Egyptian mummy problem I mentioned last month. Right now, my best guess for that solution is that there are two different answers to two different questions:

    Why do we believe Pascal’s Wager is wrong? Scope insensitivity. Eternity in Hell doesn’t sound that much worse, to our brains, than a hundred years in Hell, and we quite rightly wouldn’t accept Pascal’s Wager to avoid a hundred years in Hell. Pascal’s Mugger killing 3^^^3 people doesn’t sound too much worse than him killing 3,333 people, and we quite rightly wouldn’t give him a dollar to get that low a probability of killing 3,333 people.

    Why is Pascal’s Wager wrong? From an expected utility point of view, it’s not. In any particular world, not accepting Pascal’s Wager has a 99.999…% chance of leading to a higher payoff. But averaged over very large numbers of possible worlds, accepting Pascal’s Wager or Pascal’s Mugging will have a higher payoff, because of that infinity going into the averages. It’s too bad that doing the rational thing leads to a lower payoff in most cases, but as everyone who’s bought fire insurance and not had their house catch on fire knows, sometimes that happens.
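
    The arithmetic behind that, with finite stand-in numbers chosen purely for illustration:

    ```python
    # Stand-in numbers: in a fraction p of possible worlds the threat is real.
    p = 1e-9                  # probability that the wager's threat is genuine
    cost_of_accepting = 1.0   # utility sacrificed by taking the wager
    hell = 1e15               # large finite stand-in for the disutility of hell

    ev_accept = -cost_of_accepting   # you pay the cost in every world
    ev_reject = -p * hell            # you suffer hell only in rare real worlds
    print(ev_accept, ev_reject)      # -1.0 vs -1e6: accepting wins on average,
                                     # though it loses in almost every world
    ```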

    I realize that this position commits me, so far as I am rational, to becoming a theist. But my position that other people are exactly equal in moral value to myself commits me, so far as I am rational, to giving almost all my salary to starving Africans who would get a higher marginal value from it than I do, and I don’t do that either.

  • http://profile.typekey.com/utilitarian/ Utilitarian

    Yvain, thanks for the note. I think some of the solutions proposed here have been quite insightful and are worth considering further, yet I remain skeptical, in part for the reason you suggest: There’s a very strong tendency to want Pascal’s wager to come out invalid, because it can really be an unpleasant conclusion to come to, especially for people who have high hopes for the future of secular humanity, etc. This doesn’t prove that any of the objections are wrong (they might very well be valid), but they’re worth double- and triple-checking. It would be quite helpful to hear the opinion of an impartial inference machine, or at least a few people who don’t care at all how the calculation came out.

    Your mummy problem is a good one. Prima facie, it does seem we ought to keep the mummies where they are, other things being equal.

  • Ben Jones

    Me: imagine that Christianity in its current form were the only religion in the world, and the majority of the world was composed of believers. Would you embrace it happy in the knowledge that you’re quite safely maximising your expected utility?

    Utilitarian: Well, under those circumstances the likelihood ratio of Christianity relative to other hypotheses would probably be higher than it currently is, which would strengthen the appeal of the argument.

    Ask yourself whether or not it would strengthen the argument to the point that you’d become a believer. If the answer is no even in this exaggerated case, then you’ve found your answer to Pascal.

    To answer the comment from U just above, I’d say the unwritten (and very fuzzy) rule is that there are some eventualities with probability so small there’s little or no point worrying about them, no matter how big the utilities multiply out to. We can always postulate ridiculously big utilities on unlikely events, but the buck has to stop somewhere, particularly if your AI’s not going to go haywire.

    And for the love of [deity], EVERYONE STOP TYPING QWERTYUIOP! With probability 1, if we type it enough times it’ll produce a strangelet, and we all know there’s no god to save us.

  • conchis

    “Why is Pascal’s Wager wrong? From an expected utility point of view, it’s not.”

    Is there any chance that the problem is with expected utility maximization? It seems relatively easy to motivate maximizing expected utility when you get to play the game an infinite number of times, but if you only get to play it once, perhaps the case is less clear? What if we’re risk averse with respect to utility? That would seem to rationalize maximizing the expectation of any monotonically increasing but strictly concave function of utility. If this function had an upper bound, it would allow differences in probability to outweigh differences between utilities (even differences between infinite utilities?).

    As far as I can tell, this isn’t ruled out by the standard (Savage) justification of expected utility maximization, which is really just a bunch of consistency requirements and a representation theorem. (That is, the Savage axioms say that any consistent set of choices can be rationalized by some decision utility function, but there’s no reason to think that this function corresponds to a notion of experienced utility, which is what we’re talking about in the wager framework.)

    The difficulty with this is that it seems tough to justify any particular choice of risk-preference wrt (experienced) utility that doesn’t just boil down to reverse-engineering it from the conclusion that you wanted all along. E.g. it would seem difficult to know whether we’re really risk-averse in experienced utility or just scope insensitive (though the former might impose some further consistency requirements that scope insensitivity alone wouldn’t).
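
    A sketch of how such a bounded transform would behave (the particular v below is an arbitrary illustrative choice, not a recommendation):

    ```python
    import math

    # One strictly increasing, strictly concave, bounded choice of v on u >= 0:
    # v(u) = 1 - exp(-u / SCALE), which approaches 1 as u grows without limit.
    SCALE = 100.0

    def v(u: float) -> float:
        return 1.0 - math.exp(-u / SCALE)

    p, huge_prize = 1e-9, 1e30
    print(p * v(huge_prize))  # ~1e-9: the gamble's value is capped at p
    print(v(50.0))            # ~0.39: a modest sure gain dominates the gamble
    ```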

  • sophiesdad

    “Ask what ‘smarter-than-human’ really means. As the basic definition of the Singularity points out, this is exactly the point at which our ability to extrapolate breaks down. We don’t know because we’re not that smart.” – Eliezer Yudkowsky

    So why are all these humans discussing what God might do?

  • Tony

    Pascal’s wager fails because the probability of being raptured into heaven is greater if you reject Pascal’s offer:

    Who is more likely to be accepted into the Judeo/Christian heaven?

    (1) Someone who, knowing Pascal’s wager, makes fawning professions of love to an entity in which he does not believe?

    or

    (2) Someone who really truly couldn’t make himself believe after looking at the evidence?

  • Unknown

    Yvain has come closest to the truth here. As he states, the objections people make to the wager are ad hoc; they would reject it even if all the objections were known to be false.

    Why is this? As I’ve stated before, human beings naturally have a bounded utility function, and anyone who decides to act as though he had an unbounded utility function is deciding to act like a fanatic. With this bounded utility function, there is little expected value from anything with a sufficiently low probability. But if someone were really willing to act as though his utility function were unbounded, he would become a fanatic… and he would accept the wager.

  • conchis

    Unknown: is there an argument as to why we can’t sum finite instantaneous utilities over an infinite time to generate infinite utility?

  • http://hanson.gmu.edu Robin Hanson

    On solid accepted theory, I’m a bullet biter. I accept that my decisions may be dominated by very small chances of very large outcomes. The Christian God scenario doesn’t weigh in my beliefs much larger than many other possible gods, but I suspect I may well in principle succumb to some other wagers-to-placate-gods.

  • Carl Shulman

    Utilitarian,

    I don’t claim that the ‘atheist God’ argument licenses ignoring the issue, or allows us to continue with our lives unaffected. Indeed, trying to make the world better in a way that would be very computationally demanding to simulate may approximate Pascal’s Wager. Rather, as we have discussed, I think that considerations about the distribution of simulators are important and that the weakness of the claim that the best way to reduce your expected suffering is telling yourself to believe you believe in Christianity means that you shouldn’t try to sabotage your rational faculties.

    Why hadn’t you come up with any of the arguments that I have introduced you to in the past, e.g. committing to simulate your own history in the future in worlds where you gain access to vast simulation resources? Do you think that the psychological and social effects of church attendance are not deforming your ability to assess hypotheses about the breadth of scenarios with absurd utilities? You continue to cite ‘absolute infinity,’ as though entities in worlds with laws of physics permitting the infliction of an ‘absolute infinity’ of suffering are more likely to reward Christianity relative to atheism than less powerful simulators. Why?

    Perhaps you will say that you are relying on a division of labor, since you can expect intelligent others to generate arguments against Pascalian religion much more often than ones in its favor, but this is only a tiny corner of the space of big-utility simulation/’our world is a lie’ hypotheses that doesn’t seem to justify your focus, independent of the psychosocial benefits, cultural exposure, etc. Why not just drop the claims of belief or belief in belief, keep going to services for the social benefit, and engage in a more open-ended study of this type of hypothesis?

  • Nick Tarleton

    If you think that Nick Bostrom’s Simulation Argument is correct, then trying to create a positive singularity resembles Pascal’s wager (you’re unlikely to actually get access to vast amounts of future computational power, since you are almost certainly a simulation).

    The Simulation Argument is for the disjunction of three propositions, only one of which is that we’re in a simulation. Also, the Singularity need not require vast amounts of computational power, compared to what would be necessary to simulate the world right now; an AI that judged we were probably in a simulation could decide not to expand too far.

    e.g. committing to simulate your own history in the future in worlds where you gain access to vast simulation resources?

    Do you mean Rolf Nelson’s UFAI-deterrence suggestion?

  • Carl Shulman

    “The Simulation Argument is for the disjunction of three propositions, only one of which is that we’re in a simulation.”
    The other two possibilities are not very credible to me.

    “Do you mean Rolf Nelson’s UFAI-deterrence suggestion?”
    No, this is another proposal of my own.

  • steven

    1 (we’ll almost certainly not end up with a posthuman civilization) is clearly false, but 2 (posthuman civilizations almost never create simulations) could be true.

    It’s worth noting that if we’re being simulated, then the simulator probably also thinks it’s being simulated, by something that thinks it’s being simulated, and so on.

  • http://profile.typepad.com/utilitarian Utilitarian

    Nick and others,

    Carl’s suggestion about gaining access to vast simulation resources is as follows (correct me if I’m not explaining it accurately). Suppose there are several possible copies of me, all with the same subjective experiences up to some point in the future. Call copy number 1 of Utilitarian “U1,” copy number 2 “U2,” etc. Carl elaborates:

    [Suppose] U1 is being simulated by a sadistic Yahweh-impersonator for infinite torture. U2 is in a world where he can access vast computational resources. If U2 conducts mass simulations, then vast numbers of beings, U3 through UN, will be created with experiences identical to the earlier experiences of U1 and U2.

    You don’t know whether you are U1, U2, or UN. If U2 does no simulations of his own past history, then you have a 50% chance of being U1 (we are ignoring other worlds and simulators here). If U2 does conduct the simulations then your chance of being tortured U1 is infinitesimal. You and U2 are initially psychologically identical, so if you turn out to be the sort of person who would create simulations in U2’s place, then U2 is also the sort of person who will create simulations (we can deal with many-worlds considerations by making everything probabilistic, let’s not worry about it here). If you then steel yourself (swear oaths, undergo strong conditioning, etc) to simulate in the future if you ever get the chance, then your expected chance of suffering torture is infinitesimal. You are in the position of the winner of Newcomb’s problem [i.e., someone who finds out that he’s the kind of person who would only take one box and therefore gets $1,000,000].
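
    To make the anthropic arithmetic concrete (N is my own illustrative choice):

    ```python
    # Without extra simulations there are two copies: U1 (tortured) and U2.
    p_torture_without_sims = 1 / 2

    # If U2 runs N - 2 simulations of the shared history, there are N
    # subjectively identical copies, only one of which (U1) faces torture.
    N = 10**9
    p_torture_with_sims = 1 / N
    print(p_torture_without_sims, p_torture_with_sims)  # 0.5 vs 1e-9
    ```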

    Carl, I’m not sure where U2 comes from. If I’m U1, my simulator won’t create a copy of me and give it vast computational resources, nor will I, as U1, be able to get access to such resources in order to simulate U2 myself, right?

    As for the argument, I need to ponder it some more, and I look forward to hearing others’ comments. But here are some initial questions:

    Is this an exact analogy to Newcomb’s paradox? If so, who plays the part of the Predictor, ensuring that if U1 commits to simulating himself given the chance, U2 will do so also? (The parallel to Newcomb is that, if I commit to only taking one box, it must contain the $1,000,000 because the Predictor is never wrong.)

    Does your proposal depend on your preferred answer to Newcomb’s paradox — the two answers being (1) take one box or (2) take both boxes? What if you’re the kind of person who prefers answer (2)? The analogue of that position here, I guess, would be to point out that you already are one of U1, U2, …, UN, and you can’t change that, regardless of what commitments you make or simulations you run.

    Suppose you’re U2. It’s true that, after you run N-2 simulations, you no longer can tell which of U1, U2, …, UN you are. But causing yourself to become more uncertain doesn’t change your actual state. I can give myself brain damage and thereby become less certain of who I am, but that doesn’t change who I am….

  • Carl Shulman

    “I’m not sure where U2 comes from.”
    It’s a Big World, and various entities are in different regions with identical experiences. In some regions instances of you can access great power and in other regions instances of you are being simulated.

    “Does your proposal depend on your preferred answer to Newcomb’s paradox — the two answers being (1) take one box or (2) take both boxes? What if you’re the kind of person who prefers answer (2)? The analogue of that position here, I guess, would be to point out that you already are one of U1, U2, …, UN, and you can’t change that, regardless of what commitments you make or simulations you run.”

    Well, (1) is the right choice, and (2)s will enjoy their poverty and hellfire while (1)s laugh their way to the bank. For Newcomb’s, everyone agrees that when you take one box you can expect riches, even before you open it. Likewise, here you can expect success in avoiding hellfire if you commit yourself to simulating. If eternal suffering is not enough to get you to play to win, then what is?

  • http://profile.typepad.com/utilitarian Utilitarian

    Thanks for the clarification. I’m still left with this question, though: who plays the part of the Predictor, ensuring that if U1 commits to simulating himself given the chance, U2 will do so also?

    In Newcomb, we may not understand the mechanism by which choosing only one box guarantees the $1,000,000, but we know that one-boxing has to work because the Predictor is always right. Where is the analogy in the hellfire case? Why do we know that “one-boxing” (committing to simulations) has to do anything?

    If I’m U2, then perhaps I can affect U1 because U1 is a simulation of U2. But if I’m U2, I’m already not being simulated to be tortured….

  • Nick Tarleton

    U1 and U2 are by stipulation the same, except that one is in a simulation, so no Predictor is necessary.

    It seems to me that you should consider yourself as the set {U1, U2, …} = {all systems having this experience}, not as one member.

    I agree with Carl here, at least assuming decision theory works normally in a Big World.

  • Carl Shulman

    U1 and U2 are psychologically identical, so their decisions will be correlated.

  • http://profile.typepad.com/utilitarian Utilitarian

    Carl: U1 and U2 are psychologically identical, so their decisions will be correlated.

    Where does this correlation come from? If you mean that U1 and U2 have psychologically indistinguishable histories up to the present, that implies nothing about their correlation in the future, does it? U1 and U2 were picked from all the mind-histories in the universe as two that happened to share the same history up to this moment. But unless there’s some causal mechanism correlating them, why does that tell us anything about future moments? Or we could say that we chose U1 and U2 to be two mind-histories that are identical in both the past and the future, but there are lots of other mind-histories that are identical in the past only, and U1, U2, … would have no way to know that they aren’t one of those. You can’t get a relevant correlation by “data mining” of random noise — somewhere causation has to be involved.
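    [A quick simulation of the “data mining” point, with hypothetical random bit-string histories standing in for mind-histories: conditioning on an identical past implies nothing about the future absent a causal mechanism.]

    ```python
    import random

    # Among independently random histories, agreeing on the first 9 bits
    # (the "past") tells you nothing about bit 10 (the "future").
    random.seed(0)
    histories = [[random.randint(0, 1) for _ in range(10)]
                 for _ in range(100_000)]

    you = histories[0]
    matches = [h for h in histories[1:] if h[:9] == you[:9]]  # same past
    agree_next = sum(h[9] == you[9] for h in matches) / len(matches)
    print(len(matches), agree_next)  # next-bit agreement hovers around 0.5
    ```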

    One causal mechanism is that U1 is programmed to copy whatever U2 does. Yes, there’s correlation, but it’s in the wrong direction to help U1.

    What am I missing here?

    Nick: It seems to me that you should consider yourself as the set {U1, U2, …} = {all systems having this experience}, not as one member.

    Does that commit you to the position that Nick Bostrom calls “Unification” on p. 186 of this piece? If so, what do you think of his arguments against Unification in the subsequent three pages? If not, perhaps you could elaborate your position further, or point me to a reference?

    Robin: The Christian God scenario doesn’t weigh in my beliefs much larger than many other possible gods, but I suspect I may well in principle succumb to some other wagers-to-placate-gods.

    Can you think of other wagers that you find compelling? I’d be quite interested to hear them.

  • Unknown

    I didn’t assert that infinite utility is impossible. My point was that because human brains are finite, they naturally calculate according to a bounded utility function.

    This doesn’t mean I’m saying the wager is wrong, but that normal humans cannot accept it, because their brains do not work that way. If someone, based on some theory, believes that we should act as though we had unbounded utility functions, he is trying to get around his own brain, and he may well consequently accept the wager (or some variant, as Robin suggested).

  • steven

    I put an argument on my blog against simulations being very practically relevant: Strike the Root

  • http://www.utilitarian-essays.com/ Utilitarian

    Here’s a final note on Carl’s hell-escape scenario. Whether the idea works seems to depend on whether one endorses causal decision theory, evidential decision theory, or something in between. My intuition lies strongly with causal decision theory (since probabilities are in the mind and changing your beliefs about who you are doesn’t actually change who you are), but there appears to be a large literature on this debate, and I don’t doubt that the evidential decision theorists have some good arguments.

    steven, interesting point. However, it’s not clear to me why the egoist case should parallel the utilitarian one. I would think an egoist would care only about his own particular instantiation, not all of the instantiations of himself that might be run. I guess this gets back to the Unification vs. Duplication discussion above.

  • Carl Shulman

    As the expected computational capacity of our world, conditional on its ‘basement’ status, goes down, the probability that it is a simulation in a world with more computation-friendly laws goes up, and at some point appealing to the Dark Lords of the Matrix has a better expected value than the alternative. The laws of physics in our world (laws of thermodynamics, relativity, etc) do not seem conducive to absurd (10^^^^^^^^^^^^^^^^^^^^^10) numbers of computations, and it seems plausible that a relevant fraction of worlds in Tegmark’s ensemble are much more computation-friendly.
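    [A toy comparison with placeholder numbers, none of which are estimates: even a small probability that our world is a simulation in a computation-rich host can dominate the expected value once the basement payoff is capped by physics.]

    ```python
    # All numbers below are placeholders for illustration only.
    p_sim = 0.01             # assumed probability we're in a simulation
    payoff_basement = 1e20   # capped by thermodynamics, relativity, etc.
    payoff_host = 1e40       # a computation-friendly host need not be capped

    ev_basement_strategy = (1 - p_sim) * payoff_basement  # 9.9e19
    ev_appeal_to_simulators = p_sim * payoff_host         # 1e38, dominates
    print(ev_basement_strategy, ev_appeal_to_simulators)
    ```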

  • burger flipper

    Why don’t believers follow through on the implications of Pascal’s wager? If they truly love their fellow man and believe that their fellow man is in danger of eternal torment, how can they ever spend one spare moment not proselytizing?

    I’d be much more likely to be swayed by an argument if I saw it applied consistently: to those among the saved as well as those still in danger.

  • http://www.utilitarian-essays.com Utilitarian

    burger flipper, you make a very good point. I’ve been talking about Pascal’s wager mainly from an egoist perspective, but utilitarians certainly should be very concerned about their fellow man. While there are a number of missionaries out there, I think the reason we don’t see more is similar to the reason more people don’t give away most of their income to charity. Also, most Christians are not utilitarians.

    I think the utilitarian case for proselytism is somewhat weaker than the egoist case for personal conversion, because while I can imagine only a few scenarios other than hell according to which actions I take now will determine whether I suffer eternally, I can think of lots of scenarios in which actions I take now will prevent infinite suffering on the part of others. This is partly because, when I adopt a utilitarian concern for all sentient organisms, I can prevent infinite suffering by preventing finite amounts of suffering on the part of infinitely many organisms, not just suffering of infinite duration on the part of one particular organism.
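    [In symbols, a minimal formalization of this comparison, assuming each organism $i$ is spared at least some fixed amount of suffering $\varepsilon > 0$:]

    $$\sum_{i=1}^{\infty} s_i = \infty \quad \text{whenever } s_i \ge \varepsilon > 0 \text{ for all } i,$$

    so preventing a finite amount of suffering $s_i$ for each of infinitely many organisms matches, in total magnitude, preventing suffering of infinite duration for a single organism.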

  • Pingback: Overcoming Bias : The Problem at the Heart of Pascal’s Wager

  • SuperiorSavior

    The Christian Bible does not explicitly state the infinite nature of hell; this must be introduced. But if hell can be introduced into the tradition, why can’t it be introduced to support any notion, Marxism for instance, even if that notion originally included no reference to infinite suffering?

    It appears impossible to meaningfully decrease our probability of infinite suffering if a god who would create hell exists: we cannot know the criteria for avoiding hell even if we know the correct God, we have no guarantee that heaven is an infinite respite from hell, and no tradition appears consistent on the correct means of attaining salvation. A being capable of creating hell seems to my mind likely to continue damning people even after they enter heaven. Moreover, we have literally an infinite number of religions to choose between, since religions unknown to man, such as a non-evidentialist damning God or the god who really, really hates cows mentioned above, are at least as likely to exist as religions known to man (probably more so, as such gods would be simpler, not having to have visited Earth). The chance of choosing the correct religion is thus rendered infinitesimal enough to counter even infinite reward or punishment.

  • joe ho

    Pascal’s wager fails to work if God is real, the afterlife is real, all religions are fake, and God judges and rewards us by what we do in the world.

    Say one person works for money and another works for charity; you will appreciate the person who works for charity more. Similarly, if one person gives to charity for the sake of religion and another gives to charity but not for religion, which one will God appreciate more? Probably the person without religion.

    • IMASBA

      The idea is that you do not know the truth, so you can only think in possibilities. The possibility you described should be filed under the list of possibilities that do not require belief, just like the possibility that there is no higher power, or the possibility that there is a higher power but that it does not differentiate between “good” and “bad” persons. It doesn’t add anything revolutionary to the idea; it just makes it a little more detailed.

  • Pingback: Erroneous Responses to Pascal | Entirely Useless