You are probably familiar with Pascal’s Wager – the idea that it is worth believing in God in order to increase your probability of going to heaven and lower your probability of going to hell. More generally, for an expected utility maximiser it will always be worth doing something that offers any probability of an infinite utility, no matter how low that probability.
Hey, that's very interesting to me! :) You talk about a mutual society based on recognition of each other's utility/humanity and equal exchange?
First, infinite values in Pascal's wager have been studied at length in philosophy (though, IMO, with no compelling new arguments).
Second, infinite values do nothing to resurrect the argument in Pascal's wager. At best they allow one to conclude that either belief or non-belief in God is the utility-maximizing strategy (no ties), but without an argument that the utility assigned to belief is larger, you lose all the features that made the argument interesting.
Third, infinite utilities really aren't utilities in the standard sense. Utilities (in the Von Neumann and Morgenstern sense) must be chosen from something very much like a real field (a field with an ordering), though it might be possible to work in something like a division ring. Infinite cardinalities simply don't form such a structure (though there are real fields with 'infinitary' values: take a non-standard model of the integers and extend it in the standard way to a field). Worse, I believe you have to draw your probabilities from the interval [0,1] in the same real field as you draw your utilities to maintain the nice features of utility (probabilistic mixtures of outcomes take on all values between the outcomes, i.e. probability is at least as discriminating as utility). This means you can always pair an arbitrarily large utility with a sufficiently small probability so that it is outweighed by an arbitrarily small utility at high probability.
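The "always outweighed" claim at the end can be written as a one-line inequality (a sketch; the symbols $U$, $u$, $p$, $q$ are mine, not the commenter's):

```latex
% Given any large utility $U > 0$, any small utility $u > 0$, and any
% probability $q \in (0,1]$, choose $p < qu/U$. Then
p \cdot U \;<\; \frac{q u}{U} \cdot U \;=\; q \cdot u ,
% so the lottery paying $U$ with probability $p$ is worth less than the
% lottery paying $u$ with probability $q$. Note that this choice of $p$
% exists only because $p$ and $U$ live in the same ordered field.
```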
But to be honest, I think it is stupid to consider infinite values of ordinal or cardinal utility functions.
Well, surely there must be some way to model the promise of eternal life. The arguments against the possibility of actual infinities don't seem relevant: eternal life isn't ever completed; we can really only talk of an arbitrarily long life, in any event. The infinity involved is only potential.
Are you calling the concept of cardinality pseudo-mathematical or the application of it here pseudo-mathematical? Cardinality is a well defined concept but I agree with the second statement.
Which God? Which infinite reward? Which infinite torture? Pascal's Wager is incoherent.
Nitpick: irrational numbers are real numbers.
If you venture outside of real numbers into infinities, why stop there? Why not measure utility in irrational numbers? Why not as n-dimensional matrices? The mathematical tools are already there.
I am a Christian and sometimes I question the existence of God. Then I see that the diameter of the moon and the diameter of the sun, when viewed from Earth, are the same to our perception, and it amazes me. In my opinion the total solar eclipse proves that God is real.
I think the resolution to this issue is that we don't really want to act to maximize the expectation value of the utility function. That, and being religious carries risks.
Consider, for example, the following challenge: I give you a die, and if you roll 1-5 you get $10M; if you roll 6, I kill your best friend (or your wife, if that be the case). For sufficiently poor people, I think accepting the challenge maximizes the expectation value of the utility function. After all, $10M gives you quite a nice lifestyle, while losing your best friend, though quite a hardship, is something a lot of people go through every day and more or less get by. If you don't like this gruesome example, well, I imagine that if you take more than the 30 seconds I've given it, you can think of something better to convey the same point.
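The arithmetic in the die challenge can be sketched as follows; the utility numbers are hypothetical assumptions of mine, and only the 5/6 vs 1/6 odds come from the comment:

```python
# Expected-utility sketch of the die challenge above.
u_win = 100.0    # assumed utility of gaining $10M
u_loss = -400.0  # assumed (much larger in magnitude) disutility of the loss
p_win = 5 / 6
p_loss = 1 / 6

expected_utility = p_win * u_win + p_loss * u_loss
# expected_utility is positive here, so under these assumed numbers an
# expected-utility maximizer accepts, despite the catastrophic downside.
```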
The point is, when we are at risk of losing something, we don't always want to maximize the expectation value of the utility function. We sometimes like to take "conservative" courses of action which don't offer as high a possible payout but at the same time protect us from huge losses.
Now you might say that this really is maximizing the expectation value of the utility function, and I've just chosen the wrong method of computing expectation values. Fine. You tell me the right method, and then we'll discuss Pascal's wager again.
Because the other point is that being religious carries risks. When you go to church, you miss out on anything else you could be doing in that time. You might get in a deadly car accident on the way there. When you forgo sex before you are married, you miss out on a pleasure that wanes with time. And of course your religious beliefs affect your social standing.
And these are the real reasons we are atheists. If all I had to do were say to myself "I believe" for a chance at an eternal afterlife, sure, I'd do it. And maybe when I'm on my death bed I'll say a prayer. That's because these things don't cost me anything. But actually living a religious life now does cost me, and I don't do it because I don't consider it worth giving up the "guaranteed" utility that being religious takes away for the "chance" at the eternal afterlife.
But does such a number as "Flimple" actually exist, or is it a self-contradictory concept, like "the set of all sets that are not members of themselves" or "The smallest positive integer not definable in under eleven words"?
Change the above to "continuous utility" and you can return to worrying about ordinary things.
"Holden Karnofsky presented the best deconstruction of the finite version, Pascal's Mugging, at http://lesswrong.com/lw/7..." That's just sleight of hand; the anti-Pascal's-Mugging result only follows from his assumption of a prior that gives infinite odds against gains being realized as the gain goes to infinity. That prior does all of the work.
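A toy numeric illustration of "the prior does all of the work": if the prior probability of a payoff of size 10^k is assumed to fall off as 10^(-2k), i.e. faster than the payoff grows, then larger promised payoffs contribute less expected value, and the mugging never gets started. Both the prior and the payoff schedule below are my own illustrative choices, not Karnofsky's actual numbers:

```python
# Each term is prior * payoff = 10**(-2k) * 10**k = 10**(-k),
# so the contributions shrink as the promised payoff grows.
def contribution(k):
    prior = 10.0 ** (-2 * k)  # assumed prior, decaying faster than the payoff
    payoff = 10.0 ** k
    return prior * payoff

terms = [contribution(k) for k in range(1, 6)]
# terms is strictly decreasing: bigger promises add less expected value
```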
On the other hand, there are logically possible worlds with simple laws of physics, e.g. ones with the physics of the Game of Life, where Turing-complete structures can make computations with arbitrarily many steps and store arbitrarily large memories. Or worlds where hypercomputation is possible, or where information can be transmitted to baby universes. Or, if one is a one-boxer on Newcomb's problem, worlds with infinitely many physically similar copies of Earth, with people using similar cognitive algorithms.
Holden in that post purports to adopt a prior that assigns zero probability to these possibilities, despite their short simple descriptions. I.e. it replaces or supplements Occam's razor with immovable a priori certainty (in the face of any evidence that a human could receive in a lifetime, even living in a world where physicists appear to develop methods of unbounded computation and these appear to come into wide use) that very good outcomes can't happen.
All the talk of Bayesian adjustments obscures this, but that amounts to saying that if you assign something infinitesimal probability (and thus EV) no realistic amount of evidence could convince you it was real.
That post doesn't explain why one shouldn't act on prospects of vast outcomes with weak evidence, it says that one should assume they are a priori impossible, even if all the experimental evidence, all the scientists, and all the theory say that they are practicable.
Pascal's wager seems to me to be written in something like aspergerese. Translated into non-aspergerese it might read as follows. I did not invent love. However, I consider myself to be a creature that loves those who are close to me - spouse, friends, children, or my good little big bad dog. If I did not love, I would not care about the invention of love. Since I do love, I happen to care whether love was created or is stochastic. I might as well consider love to be created, since if it is stochastic I am unable to prove where my love comes from - not now, not ever - and the other lovers I have known desire eternal security, and I guess they know as much or more about love - my favorite subject of philosophizing - than I do. You could counter that this translation from the aspergerese does not remotely address the problem of Pascal's wager as reducible to an obvious signal directed outwards towards an unknown creator, claiming that if the creator exists, the creator, too, is loved. It also does not remotely address the question of whether a creator would or would not desire something more than an optimizing/wagering heart. However, Pascal's second most famous quote is that the heart has its reasons that reason cannot comprehend... which I translate as: every person, including whiz kids like Pascal, has too low an IQ to figure out the really important things in life.
Holden Karnofsky presented the best deconstruction of the finite version, Pascal's Mugging, at http://lesswrong.com/lw/745...
A more concise but less thorough treatment is at http://unenumerated.blogspo...
Pascal's mugging is methodologically significant, but only Pascal's wager (involving infinities) is philosophically significant.
Does a periodic universe avoid actual infinity? It seems it would merely substitute an infinitude of periods for an infinity of extent, still giving infinity a claim to natural existence.
Again, this misses the thrust of Robert Wiblin's argument. If you think arbitrary rewards are more likely, you're not going so far as to conclude that they are certain. Whatever minimal uncertainty remains about your conclusion being true, it will justify devoting your life to the choice of an afterlife if there is the *slightest* degree of domination of one likelihood by another.
My response, if I can repeat it, is that the paradox (because that's what it really is) occurs because of (impermissibly) assigning a probability to the truth of the mathematical theorems we apply to infinite utility. (This occurs where Robert asks whether you're completely sure that any proportion of infinity is as large as any other.)
The Bayesian model requires you to assign a probability of 1 (insofar as this language is even appropriate) to analytic truths that are part of the very conceptual model Bayesian inference uses. You can no more say that there's a .000001 likelihood that Cantor was wrong (when you rely on the mathematical properties of infinities) than you can say there's a similarly small probability that Bayes was wrong. You are bound to assume the truth of the analytic basics of the model you are using for your reasoning. (But since, in fact, we *can* be wrong about propositions we take to be analytic, arithmetic could even be inconsistent, so there's something wrong with bootstrapping from the Bayesian model in the first place.)