Life after death for Pascal’s Wager?

You are probably familiar with Pascal’s Wager – the idea that it is worth believing in God in order to increase your probability of going to heaven and lower your probability of going to hell. More generally, for an expected utility maximiser it will always be worth doing something that offers any probability of an infinite utility, no matter how low that probability.

My impression is that most folks think this argument is nonsense. I am not so sure. I recently met Amanda Montgomery, who is at NYU studying the challenges that infinite values present for decision theory. In her view, nobody has produced a sound solution to Pascal’s Wager and other infinite ethics problems.

A common response, and one I had previously accepted, is that we also need to consider the possibility of a ‘professor God’ who rewards atheists and punishes believers. As long as you place some probability on this being the case, then being an atheist, as well as being a believer, appears to offer an infinite payoff. Therefore it doesn’t matter what you believe.

This logic relies on two premises. Firstly, that a*∞ = b*∞ = ∞ for any a > 0 and b > 0. Secondly, that in ranking expected utility outcomes, we should be indifferent between any two positive probabilities of an infinite utility, even if they are different. That would imply that a certainty of going to ‘Heaven’ was no more desirable than a one-in-a-billion chance. Amanda points out that while these statements may both be true, if you have any doubt that either is true (p < 1), then Pascal’s Wager appears to survive. The part of your ‘credence’ in which a higher probability of infinite utility should be preferred to a lower one will determine your decision and allow the tie to be broken. Anything that made you believe that some kinds of Gods were more likely or easy to appease than others, such as internal consistency or historical evidence, would ensure you were no longer indifferent between them.
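
To see how a sliver of doubt breaks the tie, here is a minimal sketch in Python; the probabilities are toy numbers of my own choosing, and IEEE-754 floating-point infinity stands in for ∞:

    # Toy probabilities (illustrative assumptions only).
    inf = float('inf')
    p_believer = 1e-3   # chance of the infinite payoff if you convert
    p_atheist  = 1e-9   # chance under the 'professor God' hypothesis

    # Premise 1: both expected utilities are the same blank infinity.
    assert p_believer * inf == p_atheist * inf == inf

    # But give any credence q < 1 to the indifference premises: in the
    # remaining 1 - q of your credence, prospects are ranked by their
    # probability of the infinite payoff, and nothing pushes the other
    # way, so the comparison is broken in favour of the believer.
    rank = lambda p_inf: (inf, p_inf)   # tied first term, tie-breaker second
    assert rank(p_believer) > rank(p_atheist)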

Some might respond that it would not be possible to convert sincerely with a ‘Pascalian’ motivation. This might be true in the immediate term, but presumably given time you could put yourself in situations where you would be likely to develop a more religious disposition. Certainly, it would be worth investigating your capacity to change with an infinite utility on the line! And even if you could not sincerely convert, if you believed it was the right choice and had any compassion for others, it would presumably be your duty to set about converting others who could.

On top of the possibility that there is a God, it also seems quite imaginable to me that we are living in a simulation of some kind, perhaps as a research project of a singularity that occurred in a parent universe. There is another possible motivation for running such simulations. I am told that if you accept certain decision theories, it would appear worthwhile for future creatures to run simulations of the past, and reward or punish the participants based on whether they acted in ways that were beneficial or harmful to beings expected to live in the future. On realising this, we would then be uncertain whether we were in such a simulation or not, and so would have an extra motivation to work to improve the future. However, given finite resources in their universe, these simulators would presumably not be able to dole out infinite utilities, and so would be dominated, in terms of expected utility, by any ‘supernatural’ creator that could.

Extending this point, Amanda notes the domination of ‘higher cardinality’ infinities over lower cardinalities. The slightest probability of an infinity-aleph-two utility would always trump a certain infinity-aleph-one. I am not sure what to do about that. The issue has hardly been researched by philosophers and seems like a promising area for high impact philosophy. I would appreciate anyone who can resolve these weird results so I can return to worrying about ordinary things!

  • Warren

    Shouldn’t that be a > 1 and b > 1?

  • http://profiles.google.com/tgdavies Tom Davies

    But isn’t there a cost to belief? At least the time spent in Church, possibly the constraints on one’s actions.

    • Robertwiblin

      But finite cost is trumped by infinite benefit.

      • Daniel

         As Dcj138 and Neal pointed out, this is not necessarily so.

  • Dcj138

    Why would you assume that heaven is infinite utility?  If you discount your future utility, then the present value of eternal life might very well be lower than the cost of participating in a religion.
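
    A toy version of that calculation, in Python; every number below is an illustrative assumption, not something from the comment:

        # With discount factor d < 1, even an eternity of utility that only
        # begins after death has finite present value u * d**T / (1 - d).
        u = 1.0     # utility per year in heaven (assumed)
        c = 0.5     # utility cost per year of religious observance (assumed)
        d = 0.95    # annual discount factor (assumed)
        T = 50      # years until death (assumed)

        pv_heaven = u * d**T / (1 - d)                # geometric tail: ~1.5
        pv_cost   = sum(c * d**t for t in range(T))   # lifetime of costs: ~9.2
        print(pv_heaven < pv_cost)                    # True: heaven loses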

  • SunTzu

    There are several logical problems here.

    First, it is improbable that anyone could be convinced logically that any one deity, or the formal religious following surrounding it, has a higher probability than any other. Even if we expect infinite, or near-infinite, benefits to accrue to our minds, it makes no sense to pick just one to the exclusion of the others; we would instead have to expend a great deal of time on all forms of worship or faith. The wager is not inherently an argument for the Christian god or any other: its logic applies to any religion (including the “professor god”). One still has to “believe”, and thus weight the scale with a thumb, rather than wager that one has bet on the correct horse, because all of the candidates appear to stand on equal logical terms. The formal problem is that there is no historical evidence or logical consistency to these metaphysical claims with which to break ties over which form of worship to use or which god to appease.

    Second, it then has to be established that there is a being, event, or force that can dole out infinite benefits, and that the time this would occur is upon one’s death. I’m not sure how this makes one’s “mortal” life inherently useful. A utility maximizer might just kill themselves to shortcut to the benefits, and select a religious faith or system that does not penalise such action (or even claims to reward it under specific circumstances), based on the precept of which god or system is “easier to appease”. Perhaps that is the correct course, but we have no basis for preferring it over courses that tell us not to do this, or which instruct us to assist others rather than kill ourselves. Or, worse, to kill others or inflict suffering upon them.

    What we know in the form of religion is that some people have told us there is such a being that can offer infinite benefits, but this is not an established feature of the universe. That we can imagine such a concept is not the same as there being a firm and useful possibility of infinite benefits based on evidence. In order for the probability of infinite benefit to exist, one has first to accept that it is a possibility, and all we have available to suggest that is the religious faiths’ own conceit that it does.

  • http://kruel.co/ Alexander Kruel

    For some time now I have considered this problem, and others like it, to be a reductio ad absurdum of rationality and ethics.

    If you just arbitrarily reject certain wagers, then what is left of your “rationality” other than complete handwaving?

    Here is what lesswrong/OB style rationality boils down to:

    * All their methods are uncomputable.
    * The long term detriments of your actions are uncomputable.
    * A useful definition of utility seems impossible.
    * Your values are not static.
    * You cannot assign value in a time consistent way.
    * There exists no agreeable definition of “self”.
    * There are various examples of how those methods lead to absurd consequences.

    Even Eliezer Yudkowsky wrote that he would rather doubt his ‘grasp of “rationality” than give five dollars to a Pascal’s Mugger’. Which makes the whole “shut up and multiply” attitude sound like idle talk. And not just for ignoring Pascal’s mugging. I don’t see how building a friendly AI isn’t increasing the probability of a negative utility outcome.

    In other words, those people hardly seem to follow through with what they preach. And those who do wish they would have never come across those ideas in the first place: http://lesswrong.com/lw/38u/best_career_models_for_doing_research/344l

  • Hugh Parsonage

    I think you are being careless with the word “infinite”.

  • Neal

    Why should suffering or joy infinitely far in the future be weighted the same as suffering or joy now? Shouldn’t one discount when computing expected utility?

    It’s also not clear to me that infinite utility — let alone uncountable utility — is even meaningful. We may as well suppose our utility takes values in some general ordered field, which would be, say, non-Archimedean.

  • http://economicme.wordpress.com Ioan Wigmore

    When one is concerned about being ‘judged’ by a higher power, but knows nothing about what reward or penalty will be handed out, or on what basis these will be handed out, and has no reasonable evidence for deducing the motivations of said higher power, then I’d argue the solution is not to worry about the judgement, if it is forthcoming at all.

  • http://overcomingbias.com RobinHanson

    If there is a small chance that vast powers watch us and will reward or punish us vastly for our actions, then yes we should think about which of them are more likely. It seems to me that reasonable powers, who reward us for our having our beliefs aligned with our evidence, are more likely than unreasonable powers. And if we are going to look to cultural myths about such powers as any evidence about them, we should look at the common features that most cultures posit, rather than the specific features that our culture posits. This seems to mostly come down to “be nice” in the usual human terms.

    • http://www.facebook.com/yudkowsky Eliezer Yudkowsky

      Perhaps atheistic FAI-equivalent programmers across the Tegmark Level IV multiverse get together and write AIs which would precisely counterbalance any attempt to reward or punish simulated behavior; thus nobody has an incentive to try to reward or punish simulated behavior in the first place, and Gods are cancelled out without ever being born.

      (I originally contemplated this as a method of actively enforcing a no-blackmail equilibrium – agents refusing to trade with any agent that attempts to blackmail any other agent, if there’s a collective interest in just having a blackmail-free multiverse for almost everyone’s convenience, i.e. a universe where nobody ever tries to negate anyone else’s utility function. The above specialization is rather tongue-in-cheek, but it does make a point about the difficulties of such claims.)

      • http://overcomingbias.com RobinHanson

        I wonder how close a substitute it is for the decision theorists over some wide scope to get together and recommend decision theories which refuse to be influenced by such threats or promises.

      • http://pulse.yahoo.com/_WWKNK6TCFZ4NVZJ3B7JIOHKO5M Anto

        Hey, that’s very interesting to me! :)
        Are you talking about a mutual society based on recognition of each other’s utility/humanity and equal exchange?

    • Grognor

      It seems to me that arbitrary rewards and punishments are far more likely than any specific one, especially “reasonable” ones, unless they were generated by some unspecified “reasonable process”. If they exist at all then they are the fluctuations of the Dust, which would assign (for our purposes anyway) random incentives.

      • Stephen Diamond

         Again, this misses the thrust of Robert Wiblin’s argument. If you think arbitrary rewards are more likely, you’re not going so far as to conclude that they are certain. Whatever minimal uncertainty remains about your conclusion being true, it will justify devoting your life to the choice of an afterlife if there is the *slightest* degree of domination of one likelihood by another.

        My response, if I can repeat it, is that the paradox (because that’s what it really is) occurs because of (impermissibly) assigning a probability to the truth of the mathematical theorems we apply to infinite utility. (This occurs where Robert asks whether you’re completely sure that any positive share of infinity is as large as any other.)

        The Bayesian model requires you to assign a probability of 1 (insofar as this language is even appropriate) to analytic truths that are part of the very conceptual model Bayesian inference uses. You can no more say that there’s a .000001 likelihood that Cantor was wrong (when you rely on the mathematical properties of infinities) than you can say there’s a similarly small probability that Bayes was wrong. You are bound to assume the truth of the analytic basics of the model you are using for your reasoning. (But since, in fact, we *can* be wrong about propositions we take to be analytic, and arithmetic could even be inconsistent, there is something wrong with bootstrapping from the Bayesian model in the first place.)

  • Daniel

    > She even knows a philosophy student who is attempting to convert to Christianity on the basis of the wager!

    It doesn’t count if it is Will Newsome. :)

  • http://profiles.google.com/rjvg50 Kirk Holden

    At issue is the concrete embodiment of something casually called ‘belief’. Would a comatose patient who ‘believed’ in Thor or Zeus just prior to entering the coma count as a believer? They professed belief but do they really (really) believe what they once believed? Pragmatically this is about behavior — people see what I do and they hear what I say and in this way I demonstrate my belief (or habits conditioned by belief or a narrative informed by belief…). Any demonstration of belief keeps me honest — more honest than a trickster Jesuit! — and potentially converts others to my belief, expressed through habit. And I am so obviously righteous and so profoundly worth emulation, I can even have my atheist fingers crossed and still be saved (or redeemed or exchanged for a believer of greater value) since I strengthen ‘belief’ in others.

  • René Finkler

    when I look at who their proponents are, hell looks way more attractive than heaven, and – according to them – I have to do nothing to get there.  no contest.

  • Sam Lichtenstein

     Cantor had interesting thoughts concerning the relationship between the Absolute Infinite (the collection of all [finite and transfinite] ordinal numbers) and God: http://en.wikipedia.org/wiki/Absolutely_infinite

    But to be honest, I think it is stupid to consider infinite values of ordinal or cardinal utility functions.

     First of all, there are issues in the philosophy of mathematics (see the Wikipedia page on “Finitism” or “Constructivism”) that should give one pause. I think it’s probably uncontroversial (though I could be wrong) that unlike other parts of mathematics, transfinite numbers have not proved unreasonably effective in the natural sciences (in the Wignerian sense). Personally, I feel this gives some weak support for a finitist point of view that the Cantorian hierarchy of infinities does not “exist” as a collection of mathematical objects. More significantly, it at least should  make one hesitate about *using* transfinite cardinals in a mathematical *model* just as one uses finite cardinals.

    Of course, just because transfinite numbers never *have* been used to model anything doesn’t mean they never *should* be so used. Perhaps it is precisely the domain of ethics for which they are suited. But if you’re going to do so, I think you must recognize what a departure from normal science has taken place, from a mathematical point of view. And I reserve the right to think you’re full of crap.

    • http://disputedissues.blogspot.com/ Stephen R. Diamond

      What about when the universe is said (with experimental support) to be infinite in extent? What can the constructivists say to that?

      • http://www.uweb.ucsb.edu/~criedel/ Jess Riedel

        There’s no experimental support for an infinite universe besides the observation that the universe is at least as large as the observable universe.  There are plenty of other possibilities compatible with this observation, like a periodic universe.  Therefore, your beliefs are just determined by your prior probability distribution on such possibilities.

      • Stephen Diamond

        Does a periodic universe avoid actual infinity? It seems it would substitute an infinitude of periods for an infinity of extent, still giving infinity a claim to natural existence.

    • http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

      But to be honest, I think it is stupid to consider infinite values of ordinal or cardinal utility functions.

      Well, surely there must be some way to model the promise of eternal life. The arguments against the possibility of actual infinities don’t seem relevant: eternal life isn’t ever completed; we can really only talk of an arbitrarily long life, in any event. The infinity involved is only potential.

  • https://textcelerator.com/ James Babcock

    See, as a programmer, the whole Pascal’s Wager problem and its arguments keep screaming “type error!” every few sentences. Infinity is something that you use in limits and which is approached by well-defined sequences. You can’t just drop it into a real number’s slot.

    You can use some philosophical sleight of hand to try to argue that I ought to lexicographically prefer any probability of heaven over anything else. However, what I actually do prefer is a factual question, not a philosophical one, and as a simple factual matter, I do not have that preference, and my preferences are not structured that way, and I don’t think any human’s preferences are structured that way, either.

  • Tony

    Hm, well, the chief difference between believers and atheists is the number of gods they reject. A Christian rejects a thousand different gods, while an atheist rejects a thousand and one. Not all the gods would punish you with hell, but doesn’t this automatically mean you pick the god with the worst punishment? I’m not even sure which one that would be.

  • Mitchell Porter

    Did I ever mention that I am a largest-large-cardinal utility monster? Amazing but true. (Cantor came to me in a dream and told me this.) Clearly it’s important that transfinite ethicists like Amanda take this fact into account, when one day they reach the point of making practical recommendations to the world.

  • Trevor Blake

    There are two and only two options here.  You can worship all possible gods in all possible ways while simultaneously and necessarily avoiding the contradictions that are inherent in this course of action.  In this way, you will win Pascal’s wager.

    Or you can rig the game by joining the Church of the SubGenius, which guarantees eternal salvation or triple your money back.

    • jhertzli

      “To Whom it may concern: Thy Will, not mine.” — James Blish

      Also see The Universal Prayer by Alexander Pope and the Agnostic’s Prayer in Creatures of Light and Darkness by Roger Zelazny.

  • http://www.facebook.com/yudkowsky Eliezer Yudkowsky

    The god of my religion gives you Flimple Utility if you obey the twin commandments “Thou Shalt Be An Atheist” and “Thou Shalt Send $5 To Eliezer Yudkowsky Via Paypal”.  What is “Flimple”, you ask?  It’s a special number defined to have two properties; first, any probability times Flimple is greater than any non-Flimple quantity, including a probability of aleph-two utility, aleph-omega utility, and so on.  Second, if you hear about two or more Flimples, the first one you hear about is lexicographically greater than any others.  Ergo, you should be an atheist and you should send me $5.

    This is why I get nervous around postulating “quantities” with special programmatic behaviors inside utility functions.  What is infinity, but a magic token inside the system defined by its claim to have the special behavior of being larger than any other token regardless of what probability it is multiplied by?

    • V V

       But does a Flimple shave himself? XD

    • Doug S.

      But does such a number as “Flimple” actually exist, or is it a self-contradictory concept, like “the set of all sets that are not members of themselves” or “The smallest positive integer not definable in under eleven words”?

  • J O

    I would argue that there is a higher probability that I can give infinite rewards or punishments than that one of the beings in the probability space of hypothetical, unsupported sentient beings can. A vastly higher chance, actually (though still exceedingly low overall). At least, that’s the conclusion I come to if I assume that granting infinite rewards/punishments requires existing, and also consider the strong evidence that I exist but not for the various conceivable permutations of God X.

  • David Mathers

    It’s not actually clear that you need any infinitely large values to get into trouble here, so if there is something wrong with the reasoning it may not be anything to do with the fact that it involves claims about infinite utility. See this (hilariously funny) paper by Nick Bostrom: 
    http://www.nickbostrom.com/papers/pascal.pdf

    • http://www.mccaughan.org.uk/g/ gjm

      For this finite-Pascal-mugging argument to work, the victim needs to assign a positive probability to the proposition “No matter how outlandish the promise the mugger makes, he will keep it”. That seems to me perilously close to an assumption that the mugger has infinite powers.

      If the victim believes, instead, something like Pr(mugger can deliver at least X utils) = 2^-X, then their expected gain from handing over the wallet may be extremely small (in particular, less than the value of the money in the wallet) even though, for any finite amount of utility, they assign positive probability that the mugger can deliver it. (The probabilities don’t actually need to drop anywhere near that fast; all that matters is that the integral of Pr(can deliver at least X utils) is finite. So, e.g., 1/[X (log X)^2] would do.)
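
      A quick numeric check of that figure, using the standard tail formula E[U] = integral from 0 to ∞ of Pr(U ≥ X) dX for a nonnegative payoff (the scipy sketch is mine):

          # If Pr(mugger can deliver at least X utils) = 2**-X, the victim's
          # expected gain is the area under the tail, 1/ln(2) ~ 1.44 utils,
          # so a wallet worth more than that is not worth handing over.
          import math
          from scipy.integrate import quad

          expected_gain, _ = quad(lambda x: 2.0 ** -x, 0, math.inf)
          print(expected_gain, 1 / math.log(2))   # both ~1.4427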

      So no, you don’t exactly need infinitely large values, but you do need not-especially-plausible beliefs about what happens for arbitrarily large values.

    • http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

      Holden Karnofsky presented the best deconstruction of the finite version, Pascal’s Mugging, at http://lesswrong.com/lw/745/why_we_cant_take_expected_value_estimates/

      A more concise but less thorough treatment is at http://unenumerated.blogspot.co.uk/2012/07/pascals-scams.html

      Pascal’s mugging is methodologically significant, but only Pascal’s wager (involving infinities) is philosophically significant.

      • Carl Shulman

        “Holden Karnofsky presented the best deconstruction of the finite version, Pascal’s Mugging, at http://lesswrong.com/lw/745/wh…”
        That’s just sleight of hand: the anti-Pascal’s-Mugging result only follows from his assumption of a prior that gives infinite odds against the gains being realized as the gain goes to infinity. That prior does all of the work.

        On the other hand, there are logically possible worlds with simple laws of physics, e.g. ones with the physics of the Game of Life, where Turing-complete structures can make computations with arbitrarily many steps and store arbitrarily large memories. Or worlds where hypercomputation is possible, or where information can be transmitted to baby universes. Or, if one is a one-boxer on Newcomb’s problem, worlds with infinitely many physically similar copies of Earth, and people using similar cognitive algorithms.

        Holden in that post purports to adopt a prior that assigns zero probability to these possibilities, despite their short simple descriptions. I.e. it replaces or supplements Occam’s razor with immovable a priori certainty (in the face of any evidence that a human could receive in a lifetime, even living in a world where physicists appear to develop methods of unbounded computation and these appear to come into wide use) that very good outcomes can’t happen.

        All the talk of Bayesian adjustments obscures this, but that amounts to saying that if you assign something infinitesimal probability (and thus EV) no realistic amount of evidence could convince you it was real.

        That post doesn’t explain why one shouldn’t act on prospects of vast outcomes with weak evidence, it says that one should assume they are a priori impossible, even if all the experimental evidence, all the scientists, and all the theory say that they are practicable.

  • Mark M

    In my Theory of Religion class (God 101, we called it), our professor told us there were some conditions for Pascal’s Wager.  The first condition is that you have not already decided.  If you already believe or disbelieve in God you don’t need to wager – you’ve already made your choice.  The second condition is that whatever evidence you are weighing or rationale you are using does not weight the outcome in either direction.  You believe existence and non-existence are equally likely.  If your reasoning points you in one direction, Pascal’s Wager won’t make you believe something else.  The third condition is you urgently feel the need to make the decision.

    In other words, Pascal’s Wager can tip scales that are otherwise evenly balanced, when you feel the decision is too urgent to wait.

    Although Pascal’s Wager can help make the decision, it can’t sustain the decision.  Once the decision is made, however, cognitive bias kicks in to reinforce it.

    These conditions virtually guarantee that Overcoming Bias readers are unable to use Pascal’s Wager to alter their belief in God.  Utility calculations don’t matter because simply knowing that believing in God maximizes expected utility is not enough to overcome your reasoning that God doesn’t exist.  (If you already believe in God, then you also have no need for Pascal’s Wager).

    Does anyone know of anyone who has successfully used Pascal’s Wager?  I know the theory of how it’s supposed to work, but I’ve never heard of it actually working.  (Interesting side note:  My God 101 teacher was an atheist).

  • Sigivald

    She even knows a philosophy student who is attempting to convert to Christianity on the basis of the wager!

    A sucker’s born every minute, even in Philosophy majors.

    (Aren’t there also other issues than merely “being wrong”?

    Given that “belief in a religion” for these purposes is supposed to inform one’s moral choices:

    A believer in a religion that turns out to be false is presumably also making falsely-based moral choices and thus likely doing wrong (or at least not maximizing right).

    And conversely, the “correct atheist” would presumably be avoiding those pitfalls.

    Likewise, if the religion is true, falsely denying it would lead to the same mistake in moral outcome, and believing in it would lead to correct action.

    The “wager” depends on casting those mistakes or correct actions as of null value, preferring an un-testable and un-knowable asserted infinite personal payoff to “cheat” the logic*.)

    (* One wonders what the God of Moses, the one Pascal was pimping for, would think of someone who did right only to get to heaven, rather than for love of The Good. One suspects that he would not be particularly pleased with that actor.)

    (I also think the very idea of “infinite utility” is questionable, and leads to extra-questionable outcomes – and that we can make up an infinite set of notional and equally un-factually-supported cases such that choosing to not believe in god saves you from AtheistHell and there’s no reward for believing.

    It’s just as plausible as making anything else up, and merely asserting infinite utility or disutility without evidence – and there is always reason to doubt such claims! – is a piss-poor reason to choose any alternative.)

  • Douglas Knight

    “[higher cardinality utilities] is an issue that has hardly been researched by philosophers”
    The best thing I’ve heard about philosophers this year.

  • Sniffnoy

    > Extending this point, Amanda notes the domination of ‘higher cardinality’ infinities over lower cardinalities.

    This is essentially completely irrelevant. There is not one unique system of infinite numbers for all situations. Cardinals are for describing sizes of sets. They are not suited for such things as measure and integration, which is what you’re doing with utility. For those, we really do use a blank uniform “infinity”, because it’s what’s appropriate.

    Now, OK, there might be some way of doing cardinal-valued integration that I just don’t know of. But the thing is, even if you have some notion of utility with different levels of infinity, there is no a priori reason to expect these different levels of infinity to be identified with the cardinals! There are many ways of having different levels of infinity, so specifically singling out cardinals without some particular reason to is unjustified.

    (For more on this, see my LessWrong discussion post: http://lesswrong.com/lw/3g7/draftwiki_infinities_and_measuring_infinite_sets/ )

    Also, let’s not forget that a rational agent obeying the conditions of Savage’s Theorem actually has a bounded utility function. Not to mention that even if we allow unbounded utility functions, allowing an event to have infinite utility is problematic, because it means that waving your hand in the air in the hope that this somehow spontaneously causes the event to occur is better than doing anything unrelated to the event. That is to say, unless the infinite-utility event actually has probability exactly 0, all the events with finite utilities fail to influence anything the agent does, so why are we bothering with them? We would probably be better off modeling this agent as having a 0-1 utility function instead of talking about infinity. (And at least that way we avoid the ridiculousness of a 1/2 probability of the event having the same utility as a probability of 1.)
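
    A minimal sketch of that dominance point, with made-up payoffs:

        # Once any action gives positive probability to the infinite-utility
        # event, expected utility ignores everything finite about the options.
        inf = float('inf')

        def expected_utility(p_event, finite_payoff):
            if p_event == 0:
                return finite_payoff      # avoid IEEE's 0 * inf = nan
            return p_event * inf + (1 - p_event) * finite_payoff

        wave_hands  = expected_utility(1e-30, -1)      # useless act, tiny p > 0
        cure_cancer = expected_utility(0.0, 10 ** 9)   # hugely good act, p = 0
        print(wave_hands > cure_cancer)                # True: the agent waves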

  • Charles R. Twardy

    Alan Hajek has explored this area, and rejects solutions that just throw out infinite utilities.  Why, he asks, should decision theory fail precisely when the agents involved are ideally rational?  See his article on the Pasadena Game, and related pieces.  http://philpapers.org/s/Alan%20Hajek

  • V V

    Firstly, that a*∞ = b*∞ = ∞ for any a > 0 and b > 0.

    Sorry if I sound rude, but it seems that you are failing Math 101. Infinity is not a real number; you can’t multiply it by a real number.

    You need to consider either diverging limits, which are each different from one another, or equivalently the infinite hyperreals of non-standard analysis, which are also each different from one another.

    If you have two functions f and g such that
    lim_t->∞ f(t) = ∞ and lim_t->∞ g(t) = ∞,
    then the limit of their weighted difference,
    lim_t->∞ (a*f(t) – b*g(t)),
    may be anything from -∞ to a negative real to zero to a positive real to +∞.

    So it doesn’t make sense to say that an outcome of a decision theory problem has “infinite” utility without qualification.
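
    A concrete instance, with f and g chosen by me and checked in sympy:

        # f(t) = t + 7 and g(t) = t both diverge to oo, yet their weighted
        # differences can come out finite, +oo or -oo depending on a and b.
        from sympy import symbols, limit, oo

        t = symbols('t', positive=True)
        f, g = t + 7, t

        print(limit(f, t, oo), limit(g, t, oo))   # oo oo
        print(limit(1*f - 1*g, t, oo))            # 7
        print(limit(2*f - 1*g, t, oo))            # oo
        print(limit(1*f - 2*g, t, oo))            # -oo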

    Which is of course irrelevant to the issue at hand, since Pascal’s Wager doesn’t work even with finite utilities.

    “Extending this point, Amanda notes the domination of ‘higher cardinality’ infinities over lower cardinalities. The slightest probability of an infinity-aleph-two utility would always trump a certain infinity-aleph-one. I am not sure what to do about that.”

    Nothing. It’s just pseudo-mathematical nonsense.

    • Martin-2

      Are you calling the concept of cardinality pseudo-mathematical or the application of it here pseudo-mathematical? Cardinality is a well defined concept but I agree with the second statement.

  • http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

    Yudkowsky’s initial comment provides the best starting point for seeing how Robert Wiblin’s argument isn’t being grasped. Yudkowsky writes:

    “The god of my religion gives you Flimple Utility if you obey the twin commandments “Thou Shalt Be An Atheist” and “Thou Shalt Send $5 To Eliezer Yudkowsky Via Paypal”. What is “Flimple”, you ask? It’s a special number defined to have two properties; first, any probability times Flimple is greater than any non-Flimple quantity, including a probability of aleph-two utility, aleph-omega utility, and so on. Second, if you hear about two or more Flimples, the first one you hear about is lexicographically greater than any others. Ergo, you should be an atheist and you should send me $5.”

    Following Wiblin, the above is a possibility rationalists should indeed consider. The point (as I parse Wiblin) is that all such claims, like Yudkowsky’s, should be considered, but we must also consider the probability of their being true. We must assess their posterior probability based on the total evidence we have, even if that evidence is extraordinarily weak. We must do this because, if on our best estimate one alternative possibility regarding the road to the afterlife ever so slightly dominates in likelihood, we should dedicate our entire lives to doing that which makes the infinite reward most likely, and nothing but that, unless it coincides. (Thus Robin’s answer that we ought to follow the collective moral wisdom of mankind is a bit vapid: Wiblin purports to have demonstrated that striving for rewards in a logically possible afterlife is infinitely more important than any of our other pursuits; it doesn’t lead to a milquetoast moral least-common-denominatorism.)

    Understanding Wiblin’s point is a prerequisite to answering it, and nobody seems to have understood it except perhaps some of those who oppose entering infinities into decision theory. I don’t see why not. The argument that infinity isn’t a “number” rests on a particular manner of teaching calculus; calculus has been formulated to include infinite and infinitesimal quantities. And Cantor _did_ successfully treat the infinities as numbers!

    The puzzle demands an answer, for surely the conclusion is absurd. But to realize the absurdity isn’t to answer the puzzle. Here’s my attempt. The mistake lies in introducing into the Bayesian calculus the probability that the Bayesian calculus is inconsistent. This would require descending to a level where the calculus itself is up for consideration, but then you’d be left with assessing the probability of those assumptions being wrong. In a sense this is an argument, I think, that knowledge acquisition isn’t itself fundamentally a Bayesian process; it rests on more fundamental assumptions of a nonquantitative variety. But if you include infinities, the math of infinities must be taken as given. You can’t legitimately assign a probability to our assumptions about infinity being false without bringing the framework itself into question.

    • V V

       

      “calculus has been formulated to include infinite and infinitesimal quantities.”

      You must be referring to non-standard analysis. You can indeed apply decision theory to problems where the utility of the outcomes is hyperreal rather than real.

      The point is that there are infinitely many infinite hyperreals, each different from another.

      So the statement “a*∞ = b*∞ = ∞ for any a > 0 and b > 0”
      is ill-defined, because there are infinitely many possible ∞. If you fix a choice of ∞, then the statement is wrong unless a = b = 1.

      More generally, a decision problem can be defined such that the utilities of outcomes are elements of any set equipped with a total order relation and a multiplication-by-probability operation.


  • Michael Wengler

    I crossed this river when I was 10 years old and being educated to be a Roman Catholic.  I was told that my non-belief would damn me.  At a certain point I realized that in terms of deciding whether it was true that god existed, any “fact” that my non-belief would damn me was irrelevant.  

    So the next question I guess is whether you are committed to believing only things you think are likely to be true, whether that is a definition of belief or not.  I suppose at 10 I decided that my definition of belief was “thinking something was probably true” and I stopped “believing” in god.  Poof!  At that point my anxiety about going to hell if I didn’t believe largely dissipated.  

    Another component was that I had been told the god I was supposed to believe in was omniscient, omnibenevolent, and loved me. It was not really plausible to me that a god with those sterling properties would base my salvation on my doing the impossible (believing something I thought wasn’t true), or on my being willing to lie to myself, or on my being too stupid to notice these problems.

    Did Pascal convert before he died?  

  • Me

    Funny, she, like Pascal, left out a very important variable: truth, or rather one’s distance from it and movement towards or away from it.

    Factor that in and Pascal completely falls apart.

  • http://www.facebook.com/eerlikh Mira Kuru

    Can someone show me the passage in the Bible about hell? Because “eternal suffering” was never a consequence according to my religious uncle, who reads the Bible all the time.

  • lightreadingguide

    Pascal’s wager seems to me to be written in something like aspergerese. Translated into non-aspergerese it might read as follows.
    I did not invent love. However, I consider myself to be a creature that loves those who are close to me: spouse, friends, children, or my good little big bad dog. If I did not love, I would not care about the invention of love. Since I do love, I happen to care whether love was created or is stochastic. I might as well consider love to be created, since if it is stochastic I am unable to prove where my love comes from, not now, not ever, and the other lovers I have known desire eternal security, and I guess they know as much or more about love (my favorite subject of philosophizing) than I do.
    You could counter that this translation from the aspergerese does not remotely address the problem of Pascal’s wager as reducible to an obvious signal, directed outwards towards an unknown creator, claiming that if the creator exists, the creator, too, is loved.
    It also does not remotely address the question of whether a creator would or would not desire something more than an optimizing/wagering heart.
    However, Pascal’s second most famous quote is that the heart has its reasons that reason cannot comprehend... which I translate as: every person, including whiz kids like Pascal, has too low an IQ to figure out the really important things in life.

  • Richardsilliker

    infinite utility

    Change the above to “continuous utility” and you can return to worrying about ordinary things.

  • MPS

    I think the resolution to this issue is that we don’t really want to act to maximize the expectation value of the utility function.  That, and being religious carries risks.

    Consider for example the following challenge: I give you a die, and if you roll 1-5 you get $10M; if you roll 6 I kill your best friend (or your wife, as the case may be). For sufficiently poor people, I think accepting the challenge maximizes the expectation value of the utility function. After all, $10M gives you quite a nice lifestyle, while losing your best friend, though quite a hardship, is something a lot of people go through every day and more or less get by. If you don’t like this gruesome example, well, I imagine if you take more than the 30 seconds I’ve given it you can think of something better to convey the same point.
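
    A toy version of the calculation; the utility numbers below are pure assumptions of mine:

        # Roll 1-5: win $10M.  Roll 6: lose your best friend.  Whether the
        # gamble 'maximizes expected utility' depends entirely on the
        # utilities you plug in, which is the point.
        p_win, p_lose = 5 / 6, 1 / 6

        u_money  = 10.0    # utils from $10M for a poor person (assumed)
        u_friend = -30.0   # utils of losing your best friend (assumed)
        print(p_win * u_money + p_lose * u_friend)   # +3.3: accept

        # Value the friend more steeply and the same rule says refuse:
        print(p_win * u_money + p_lose * (-80.0))    # -5.0: refuse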

    The point is, when we are at risk of losing something, we don’t always want to maximize the expectation value of the utility function. We sometimes like to take “conservative” courses of action which don’t offer as high a possible payout but at the same time protect us from huge losses.

    Now you might say that this really is maximizing the expectation value of the utility function, and I’ve just chosen the wrong method of computing expectation values.  Fine.  You tell me the right method, and then we’ll discuss Pascal’s wager again.

    Because the other point is that being religious carries risks.  When you go to church, you miss out on anything else you could be doing in that time.  You might get in a deadly car accident on the way there.  When you forgo sex before you are married, you miss out on a pleasure that wanes with time.  And of course your religious beliefs affect your social standing.

    And these are the real reasons we are atheists. If all I had to do were say to myself “I believe” for a chance at an eternal afterlife, sure, I’d do it. And maybe when I’m on my deathbed I’ll say a prayer. That’s because these things don’t cost me anything. But actually living a religious life now does cost me, and I don’t do it because I don’t consider it worth giving up the “guaranteed” utility that being religious takes away, for the mere “chance” at the eternal afterlife.

  • JVA

    If you venture outside of real numbers into infinities, why stop there? Why not measure utility in irrational numbers? Why not as n-dimensional matrices? The mathematical tools are already there. 

    • V V

       Nitpick: irrational numbers are real numbers.

  • Steve Greene

    Which God? Which infinite reward? Which infinite torture? Pascal’s Wager is incoherent.

  • TruePath

    Three comments:

    First, infinite values in Pascal’s wager have been studied at length in philosophy (though IMO without compelling new arguments).

    Second, infinite values do nothing to resurrect the argument in Pascal’s wager. At best they allow one to conclude that either belief or non-belief in god is the utility-maximizing strategy (no ties), but without the argument that the utility given to belief is larger, you lose all the features that made the argument interesting.

    Third, infinite utilities really aren’t utilities in the standard sense. Utilities (in the von Neumann and Morgenstern sense) must be chosen from something very much like a real field (a field with an ordering), though it might be possible to work in something like a division ring. Infinite cardinalities simply don’t form such a structure (though there are real fields with ‘infinitary’ values: take a non-standard model of the integers and extend it in the standard way to a field). Worse, I believe you have to draw your probabilities from the interval [0,1] of the same real field as you draw your utilities, to maintain the nice features of utility (probabilistic mixtures of outcomes take on all values between the outcomes, i.e. probability is at least as discriminating as utility). This means you can always pair an arbitrarily large utility with a sufficiently small probability, so that it is outweighed by an arbitrarily small utility with high probability.
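
    A sketch of that final pairing step (the formalization and numbers are mine):

        # If probabilities and utilities are drawn from the same ordered
        # field, any huge utility U can be outweighed by pairing it with
        # the legal probability p = epsilon / (2 * U), since p * U < epsilon.
        def outweighed(U, epsilon):
            p = epsilon / (2 * U)     # lies in (0, 1] whenever U >= epsilon / 2
            return p * U < epsilon

        print(outweighed(U=10.0 ** 100, epsilon=1e-6))   # True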

  • Pingback: Philosophy of Utility Maximisation | Eventually Almost Everywhere