The fallacy of the one-sided bet (for example, risk, God, torture, and lottery tickets)

This entry by Eliezer struck me as an example of what I call the fallacy of the one-sided bet. As a researcher and teacher in decision analysis, I’ve noticed that this form of argument has a lot of appeal as a source of paradoxes. The key error is the framing of a situation as a no-lose (or no-win) scenario, formulating the problem in such a way that tradeoffs are not apparent. Some examples:

– How much money would you accept in exchange for a 1-in-a-billion chance of immediate death? Students commonly say they wouldn’t take this wager for any amount of money. Then I have to explain that they will do things such as cross the street to save $1 on some purchase, even though there’s some chance they’ll get run over in the process. (See Section 6 of this paper; it’s also in our Teaching Statistics book. A rough version of the arithmetic appears after this list.)

– Goals of bringing the levels of various pollutants down to zero. With plutonium, I’m with ya, but other pollutants occur naturally, and at some point there’s a cost to getting them lower. And if you want to get your radiation exposure down to zero, you can start by not flying and not living in Denver.

– Pascal’s wager: that’s the argument that you might as well believe in God because if he (she?) exists, it’s an infinite benefit, and if there is no god, it’s no loss. (This ignores possibilities such as: God exists but despises believers, and will send everyone but atheists to hell. I’m not saying that this is highly likely, just that, once you accept the premise, there are costs to both sides of the bet.) See also this from Alex Tabarrok and this from Lars Osterdal.

– Torture and the ticking time bomb: the argument that it’s morally defensible (maybe even imperative) to torture a prisoner if this will yield even a small probability of finding where the ticking (H)-bomb is that will obliterate a large city. Again, this ignores the other side of the decision tree: the probability that, by torturing someone, you will motivate someone else to blow up your city.

– Anything having to do with opportunity cost.

– The argument for buying a lottery ticket: $1 won’t affect my lifestyle at all, but even a small chance of $1 million will make a difference! Two fallacies here. First, most lottery buyers will get more than one ticket, so realistically you might be talking hundreds of dollars a year, which indeed could affect your standard of living. Second, there actually is a small chance that the $1 can change your life: for example, it might be the extra dollar you need to buy a nice suit that gets you a good job, or whatever. (The expected-value arithmetic is sketched below.)
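
Both of those tradeoffs are easy to put in numbers. Below is a minimal sketch in Python; the per-crossing death risk, the lottery odds, and the prize are assumed figures chosen to make the arithmetic concrete, not estimates taken from the examples above.

    # Illustrative numbers only: the crossing risk and the lottery odds are assumptions.

    # Crossing the street to save $1, with an assumed 1-in-100-million chance of
    # being run over, implies an upper bound on your revealed value of life:
    saving = 1.0                   # dollars saved by crossing
    p_death = 1e-8                 # assumed death risk per crossing
    value_of_life = saving / p_death
    print(f"implied value of life: ${value_of_life:,.0f}")            # $100,000,000

    # At that revealed value, a 1-in-a-billion death risk is worth about a dime,
    # so refusing it "for any amount of money" is inconsistent with the crossing:
    print(f"fair price of a 1-in-a-billion risk: ${value_of_life * 1e-9:.2f}")

    # Lottery: a $1 ticket with an assumed 1-in-10-million shot at $1 million.
    p_win, prize, price = 1e-7, 1_000_000, 1.0
    print(f"expected value of the ticket: ${p_win * prize - price:+.2f}")   # -$0.90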

There are probably other examples of this sort of argument. The key aspect of the fallacy is not that people are (necessarily) making bad choices, but that they only see half of the problem and thus don’t realize there are tradeoffs at all.

  • anonymous

    – Torture and the ticking time bomb: the argument that it’s morally defensible (maybe even imperative) to torture a prisoner if this will yield even a small probability of finding where the ticking (H)-bomb is that will obliterate a large city. Again, this ignores the other side of the decision tree: the probability that, by torturing someone, you will motivate someone else to blow up your city.

    No: the assumption in such scenarios is that the ticking bomb already exists, and that the only way to have a chance at preventing it from going off is by using torture. In such a situation, future would-be bombers are irrelevant (only if you save the city now will it even matter what future terrorists do or don’t do).

    The thought experiment is not about whether a systematic policy of torture is morally defensible; it is about whether the “no torture” rule is absolute, or admits exceptions.

  • douglas

    Excellent point. I believe a related situation comes up often in economic discussions, where opportunity costs are overlooked. So, for example, when debating the military budget, the question “How else could this money be spent?” is seldom asked.

  • Shakespeare’s Fool

    Isn’t one problem with the first comment that it ignores the possibility that there is no ticking time bomb? If you haven’t found it, how do you really know it exists?

    John

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    I have to deny the charge of ignoring tradeoffs. The problem with playing around with the really huge quantities I described is not that the bet is one-sided but that it may not balance out exactly for incredibly huge quantities. Robin did a good job of resolving my challenge, but the answer was not that the bet was multisided (this was already acknowledged), but that anthropic arguments really did drive down the prior probability to infinitesimal levels.

  • http://entitledtoanopinion.wordpress.com/ TGGP

    I had never heard the argument that torturing will cause nuking. It doesn’t sound plausible to me. Nukes aren’t easy to come by, so I would expect their use to be conditioned on more far-reaching policies. If I were torturing the guy in an attempt to stop the ticking nuke, then after the situation had been resolved and he was no longer needed, I would kill him, and either withhold the information of what happened or make up some story about how he died (suicide is a good one). The United States has killed enough people that one disappearance won’t affect things much. The real problem with the torture policy is that you just know it’s going to be abused. We ultimately want agents who will flout the law on our behalf in such rare circumstances but are willing to accept the consequences and go to jail for their actions.

    How is plutonium different when it comes to “at some point there’s a cost to getting them lower”?

    A good argument about the lottery ticket is that you should have already spent that dollar on something that would give you more satisfaction than the lottery ticket. Always think on the margin and ask the question “Is the best use for this dollar that ticket?”

    douglas, why assume that the money must be spent? I’d be in favor of cutting military spending even if it just got turned into a tax cut and we all hid that money under our mattresses.

  • Gray Area

    As a friend of mine points out, the common thread here is that using expectations is only reasonable when you play ‘the games’ enough times to let the law of averages do its work. In other words, a reasonable definition of utility will have it converge to the expectation as the number of ‘plays’ approaches infinity, and at the same time have it converge to the probability of a ‘win’ as the number of ‘plays’ approaches zero.

    This seems to agree with everyday experience, where people trade off small gains for small probability of a very large cost (e.g. dying due to taking an airplane/crossing the street/driving on freeways/etc).
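
    (A minimal simulation of this convergence point. The bet below, win $2000 with probability 1/1000 and otherwise lose $1, is an assumed example chosen for illustration.)

        import random

        def play():
            # Assumed long-odds bet: win $2000 with probability 1/1000, else lose $1.
            # Expected value per play: 0.001 * 2000 - 0.999 * 1 = +1.001
            return 2000.0 if random.random() < 0.001 else -1.0

        random.seed(0)
        for n in (1, 100, 1_000_000):
            mean = sum(play() for _ in range(n)) / n
            print(f"mean payoff over {n:>9,} plays: {mean:+.3f}")
        # A single play is almost always -1.000; only the long-run average
        # approaches the expectation of +1.001.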

  • http://lubabnomore.blogspot.com Lubab No More

    > $1 won’t affect my lifestyle at all, but even a small chance of $1 million–that will make a difference!

    I like the idea that when you buy a $1 lotto ticket what you actually get is the thrill of dreaming about what you would do with the money. When you don’t have a ticket you know there is no chance you will be a millionaire by the end of the week. With the ticket in hand people can feel like there IS a chance, no matter how unlikely.

  • http://videogameworkout.com Glen Raphael

    I liked Jim Henley’s commentary on the ticking time bomb scenario both in Reason and on his blog. Among many downsides being ignored are the possibility that torture will produce a false confession that leads the good guys on a wild goose chase, the fact that you really can’t know the subject has the info you want – he might be innocent or he might not know anything useful even if guilty – and the possibility that torture will *prevent* you from learning what you could have learned through friendlier and generally more effective methods.

    (Being “nice” while interrogating can lead to a change of heart by the prisoner, rarely causes “accidental” deaths when you go too far with it, and rarely prompts prisoners into suicide attempts or hunger strikes.)

  • douglas

    TGGP- I too would like to see the military budget cut regardless.
    Thanks for website info.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Lubab, perhaps the most pernicious aspect of the lottery is that fantasizing about it is a waste of hope. And between zero chance of wealth and epsilon chance, there is only an order-of-epsilon difference.

  • Constant

    Among many downsides being ignored

    But are they being ignored? Or taken into account but with a smaller weight than opponents of torture might give them?

  • Constant

    That people buy lottery tickets has a welcome aspect as well. We’re familiar with the criticism that people overestimate improbable events. But the flip side of this is that they are not underestimating improbable events. There is an unfortunate tendency sometimes to “round down” small probabilities to zero, and it’s nice that, at least, people aren’t doing that here.

  • http://profile.typekey.com/andrewgelman/ Andrew

    Anonymous,

    As a statistician (or maybe as a political scientist) I don’t see a sharp distinction between a “systematic policy” and the allowance of “exceptions.” Once you say as a general point that the no-torture rule admits exceptions, the problem moves to determining where these exceptional cases are, which implies a policy of how to make such decisions. At that point the key question is whether the policy is “systematic” or “unsystematic,” but either way you’re talking about something that might be done more than once. As John (comment #3 above) asks, what information are you going to use to determine whether your one-time-only-exception rule for torture is operative in any particular case? Realistically, you’ll have to use some partial information, which may be wrong. You also have to consider the possibility, as Glen points out, that torture might make you less likely to find that bomb, even conditional on its existing. My point is that, even in the thought experiment, there are tradeoffs that are not at all acknowledged in the usual formulation.

    Eliezer,

    Point taken. I wrote the blog entry after your initial description, which was formulated as a one-sided bet (it was an argument about why inferential principles should lead you to definitely pay the guy $5). In comments, you did allow for the utility to be positive or negative.

    TGGP,

    My point is not that torturing will directly cause nuking, but that such a policy (as noted above, I don’t see how you can have a policy to do something only when necessary) can lead indirectly to these negative outcomes. I’m not trying to make a specific argument about any particular policy; I’m saying that the “ticking time bomb” argument, as usually presented, does not recognize tradeoffs.

    Also, I agree that the $1 from the lottery ticket can make your life better right now (e.g., buy you a Coca-Cola). My argument in the blog entry was an attempt to fight the lottery-ticket-buying impulse on its own terms.

    Constant: It may be that the downsides are being taken into account by policymakers. But I haven’t seen that in the presentations of the “ticking time bomb” argument. Once you recognize the possibility of tradeoffs, this pushes you toward a more quantitative assessment of said tradeoffs.

    Also, it is interesting to me that, although lottery ticket buyers act as if they are overestimating small probabilities, they are in fact sensitive to changes in these probabilities. For example, I read that when the lottery somewhere was changed from “pick 6 out of 42” to “pick 6 out of 48” (or something like that), many people stopped playing. They want the tiny, tiny probability of getting rich, but at the same time they don’t want to get ripped off.
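
    (For reference, the jackpot odds implied by those two formats, via a quick Python sketch; the exact rule change Andrew recalls may have used different numbers.)

        from math import comb

        # Moving from "pick 6 of 42" to "pick 6 of 48" more than doubles
        # the odds against hitting the jackpot.
        for n in (42, 48):
            print(f"pick 6 of {n}: 1 in {comb(n, 6):,}")
        # pick 6 of 42: 1 in 5,245,786
        # pick 6 of 48: 1 in 12,271,512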

  • douglas

    I know some people who occasionally buy lottery tickets. It seems that they get a lot of fun out of it, even when they lose. The idea of winning is exciting, and if it is true that the mental picture produces bodily responses similar to the actual event (brain science at its best), then I wonder if the release of the “I am a winner” chemicals is good for the health.
    Anyone done such a study?

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Er, sorry to insist on this, but my original post did acknowledge tradeoffs, not just the comments section:

    The original version of Pascal’s Wager is easily dealt with by the gigantic multiplicity of possible gods, an Allah for every Christ and a Zeus for every Allah, including the “Professor God” who places only atheists in Heaven. And since all the expected utilities here are allegedly “infinite”, it’s easy enough to argue that they cancel out…

    But suppose I built an AI which worked by some bounded analogue of Solomonoff induction…

    If the probabilities of various scenarios considered did not exactly cancel out, the AI’s action in the case of Pascal’s Mugging would be overwhelmingly dominated by whatever tiny differentials existed in the various tiny probabilities under which 3^^^^3 units of expected utility were actually at stake.

    This is what gives Pascal’s Mugging its sting – the expected utility doesn’t have to be positive, let alone one-sided; the problem is if the net expected utility is unreasonably huge in absolute magnitude, because then it can dominate mere “ordinary” considerations such as, say, your daughter’s life or the survival of the human species.

    Even if you count only what I wrote “above the fold” (that is, before the continuation on its own page), I didn’t say anything about how to resolve Pascal’s Mugging: above the fold, I only introduced the dilemma.

  • Michael Sullivan

    As a friend of mine points out, the common thread here is that using expectations is only reasonable when you play ‘the games’ enough times to let the law of averages do its work.

    Except that this is wrong.

    Imagine the following bet:

    I will generate a random number from 1 to 1000 by a process that you agree is fair. Before it is revealed, you will put up some amount of money as a bet and attempt to guess it. If you are correct, I will pay you 1100x your bet. If you are wrong, I take your bet. You can choose *any* size bet, but you can play the game only once. Do you play?

    If you have any kind of normal (i.e. continuous and always increasing) U($) function, there must be some size bet at which this game offers you positive expected utility and you should play it, even though 99.9% of the time you will simply lose your bet with no chance to play again.

    The reason you might rationally choose *not* to play is if there is a minimum bet, and at that level, the utility of a win is worth less than 1000x the disutility of a loss. So you might not choose to bet $10 to win $11000, and relatively few people would bet $10,000 to win $11,000,000 (though thousands of poker players think they are doing just that every year at the WSOP), but wouldn’t most people bet $1 to win $1100? And who in the rich world wouldn’t bet 1c to win $11, if we don’t count the time required to play the game as a cost?

    What you say here is a heuristic. It often works because very long odds with a small positive EV generally run up against non-linear utility as the bet size rises, and against time costs greater than the EV as the bet size gets smaller. For many such bets there will be no window where it makes sense to make the bet, and the longer the odds, the more likely this is the case. But under linear utility with zero transaction costs, any positive-EV bet is worth taking, even if you can only take it once.
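
    (A quick check of this claim under one concrete utility function, as a Python sketch. The use of log utility and the $50,000 starting wealth are my assumptions for illustration, not part of the game as stated.)

        import math

        # Sullivan's game: guess a number from 1 to 1000; a correct guess pays
        # 1100x the stake, a wrong one loses it, so the EV is +$0.101 per dollar.
        def eu_change(wealth, bet, p=1/1000):
            win = math.log(wealth + 1100 * bet)
            lose = math.log(wealth - bet)
            return p * win + (1 - p) * lose - math.log(wealth)

        w = 50_000.0   # assumed starting wealth
        for bet in (1, 5, 10, 100):
            print(f"bet ${bet:>3}: change in expected log-utility = {eu_change(w, bet):+.7f}")
        # With these assumptions the sign flips around a bet of $8: tiny bets
        # have positive expected utility even played only once; larger ones do not.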

  • http://www.mccaughan.org.uk/g/ g

    Michael, I *think* you’re arguing in a circle, or else taking “one should always maximize expected utility” as an axiom.

    Imagine someone who maximizes expected utility except when faced with a choice of a sort they’re almost certainly never going to encounter again, at which point (let’s say) they maximize expected quasiutility where quasiutility is related to utility in some way that penalizes possible losses. (Let’s fill in the details: let U0 be their expected utility if the choice hadn’t arisen and U their actual utility in any given situation; then they maximize the expectation of U-h(p)|U| where p is the probability of ever again encountering a similar choice and h(p) is a smooth function that’s 0 for p > 10^-6 and -> 1 in some nice way as p -> 0.)

    Do you have grounds for declaring this person irrational other than a prior conviction that maximizing expected utility is the only rational procedure?

    I’m all for maximizing expected utility, by the way, and I think nonlinear utility plus overheads can do a fine job of justifying as much risk-aversion as actually deserves justifying. But if there’s good reason to think that no other approach can be rational, I haven’t seen it yet. (I think the usual ways of proving such theorems can’t work for choices of kinds that are almost certainly not going to be encountered again, but I wouldn’t be astonished to find that I’ve missed something.)
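
    (g’s construction can be made concrete. The particular h below is one arbitrary choice of “nice” function, purely for illustration.)

        def h(p, cutoff=1e-6):
            # Zero for p >= cutoff; rises smoothly to 1 as p -> 0.
            if p >= cutoff:
                return 0.0
            return (1 - p / cutoff) ** 2

        def quasiutility(u, p_repeat):
            # g's formula U - h(p)|U|: as the chance of ever facing a similar
            # choice again goes to 0, gains shrink toward 0 and losses double.
            return u - h(p_repeat) * abs(u)

        # A one-off gamble (p_repeat ~ 1e-9): a +10 win counts for almost
        # nothing, a -10 loss counts almost double.
        print(quasiutility(10.0, 1e-9), quasiutility(-10.0, 1e-9))   # ~0.02, ~-19.98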

  • http://entitledtoanopinion.wordpress.com/ TGGP

    Andrew, the torture hypothetical isn’t framed as having no trade-offs; the downside is torturing someone (at least for most people). It’s also possible that God exists and might strike me dead on the spot for torturing, but I don’t assign it a significant probability, nor do I do so for your torture -> nuke story.

    g, I might be over my head but I think your quasi-utility is just a feature of someone’s utility function rather than a deviation from it.

  • http://theorybloc.blogspot.com/ Thomas Brownback

    Sam Harris has an interesting argument “In Defense of Torture” which can be found online at the Huffington Post: http://www.huffingtonpost.com/sam-harris/in-defense-of-torture_b_8993.html

    Harris basically argues that, due to the inevitability of collateral damage in modern warfare, it is inconsistent to find modern warfare ethically palatable and find torture universally despicable.

    At the very least, Harris provides a provocative and well written argument. No matter how you feel about the conclusion, it’s worth a read.

  • Michael Sullivan

    Imagine someone who maximizes expected utility except when faced with a choice of a sort they’re almost certainly never going to encounter again, at which point (let’s say) they maximize expected quasiutility where quasiutility is related to utility in some way that penalizes possible losses. (Let’s fill in the details: let U0 be their expected utility if the choice hadn’t arisen and U their actual utility in any given situation; then they maximize the expectation of U-h(p)|U| where p is the probability of ever again encountering a similar choice and h(p) is a smooth function that’s 0 for p > 10^-6 and -> 1 in some nice way as p -> 0.)

    It’s true that I’m assuming utility maximization is “rational”. It’s your argument here that feels circular to me. If you’re planning to maximize something other than utility, then it’s pretty clear that expected utility analysis is not the way to figure out how to do that. Why would you do that?

    If you incorporate the probability of seeing the problem again *into your utility function* as you do above, then you can certainly finagle things to make the original statement correct, but that seems to be assuming what you’re trying to prove (that there’s utility in making multiple bets).

    I agree that it is correct much of the time in practice under reasonable utility functions that incorporate the extra utility penalty of losses to refrain from non-repeatable bets with high volatility.

    There are many bets where log (or even more risk-averse) utility combined with transaction costs means that no bet is worth making at any size at some long but profitable odds, but where a repeated bet of some size *would* be worth making. I’m pointing out that Gray Area appears to be making a statement of principle out of something that is more accurately a heuristic.

    Of course, if you challenge the expected utility paradigm, then the analysis falls flat, but I don’t see any basis presented for challenging it on this point.

  • Michael Sullivan

    Harris basically argues that, due to the inevitability of collateral damage in modern warfare, it is inconsistent to find modern warfare ethically palatable and find torture universally despicable.

    Yes. Are there monsters who find modern warfare ethically palatable?

    To be less flip, you can make pretty solid arguments against almost any *absolute* moral rule. So yes, ruling out torture absolutely 100% is probably wrong, as is ruling out modern warfare. OTOH, ruling it out in all but exceptional cases appears to be a good idea, and I think that applies to modern warfare as well, certainly to unilaterally *starting* wars.

  • http://web.mit.edu/sjordan/www/ Stephen

    I think the expected utility paradigm can be challenged in the following way. Suppose we take the point of view that ethics is essentially an empirical science. It consists of the formulation and testing of theories of what we want. If a theory predicts we would want something that we actually don’t, then the theory is wrong. In a few examples, such as the previous post on Pascal’s mugging, the theory that we want to maximize expected utility seems to make predictions at odds with our intuitive desires and sense of reasonable ethics. (The original form of the Pascal’s mugging example seems to be dispatched by an anthropic argument of Robin’s. However, as pointed out by Mike Vassar, one can construct a different example, where the mugger threatens to kill pigs instead of humans. As far as I can tell, a satisfactory resolution for the pig-mugging paradox has not yet been suggested.)

  • http://entitledtoanopinion.wordpress.com/ TGGP

    Ethics is not an empirical science because intuitions are not universal and ethics as practiced often embraces counter-intuitive results.

  • http://web.mit.edu/sjordan/www/ Stephen

    I would argue that arriving at counter-intuitive results is not an indication that ethics is non-empirical. If, after considering some argument, example, apparent paradox, or whatever, you conclude that you embrace some counter-intuitive result, you’ve simply discovered something about your desires which is not obvious by merely examining your instantaneous intuitive gut reactions. The various hypothetical scenarios and paradoxes serve like telescopes to discover things about your desires which are not immediately obvious.

    When one works out the implications of an ethical theory and arrives at a counter-intuitive result, one of two things can happen. Sometimes the result is so totally repugnant that the theory is thereby falsified. In other cases, one finds that the initial gut reaction against the result is outweighed by other desires (e.g. justice, consistency, beauty) on which the theory rests. The initial reaction arose because the bearing of these desires on the scenario being considered was not obvious prior to devoting some thought to the matter.

  • Gray Area

    Sorry for the delayed reply.

    Michael: You can’t just define rationality in terms of utility expectation; you need to provide justification for why this sense of ‘rational’ is the right one. I take Pascal’s Mugging seriously because it’s the mirror image of the Devil’s Lottery (vast probability of small gain vs small probability of vast loss), which is a game people are willing to play every day despite its (apparent) negative expected utility. I take these sorts of games seriously enough to feel the need to define utility in a way that avoids counterintuitive results in these games, and then talk about maximizing that, rather than the conventional expectation.

    To use a different line of attack: why SHOULD we use expectations without sufficient rounds to get reasonable convergence of our gain to the expectation? People certainly don’t in everyday life, and it seems reasonable to me.

  • http://www.mccaughan.org.uk/g/ g

    Could you be a bit more explicit about this “Devil’s Lottery”? (Is it something different from Pascal’s Mugging, e.g., or do you just mean that we’re constantly faced with a huge number of potential Pascal’s Muggings and choose not to be mugged?)

  • Gray Area

    Devil’s Lottery is a game where you spin a wheel with lots of numbers on it, and if 0 comes up you die (or a Really Bad Thing with an extremely large negative utility happens), while if any other number comes up, you get a dollar. The claim about the Devil’s Lottery is that even though it may have negative expected utility (in fact arbitrarily negative), it’s nevertheless rational to play as long as you don’t play ‘too many’ rounds (e.g. enough rounds where you start to converge to the expectation).
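
    (In expected-utility terms, with an assumed wheel size and an assumed finite disutility for the Really Bad Thing:)

        n_slots = 1_000_000        # assumed number of slots on the wheel
        u_death = -10_000_000.0    # assumed utility of the Really Bad Thing
        u_dollar = 1.0
        ev = (1 / n_slots) * u_death + ((n_slots - 1) / n_slots) * u_dollar
        print(f"EV per spin: {ev:+.2f}")
        # -9.00: the expectation is negative whenever |u_death| > n_slots - 1,
        # yet any single spin almost certainly pays a dollar.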

  • Nick Tarleton

    If it’s rational to play once, it’s equally rational to play once more after that, and so on ad infinitum (modulo diminishing returns of money). Sounds like the gambler’s fallacy.

    How do you mean people are willing to play this every day?

  • Gray Area

    Nick: It’s not obvious that the rationality of playing once should imply the rationality of playing more than once. Consuming alcohol or smoking (or experimenting with drugs) is a good example. It may be worth it to try these things occasionally, but most people know that you don’t want to consume them repeatedly enough to develop the problems doctors expect will develop.

    People play the Devil’s Lottery every day because most of our actions have small gains but also a small probability of death or crippling loss (e.g. driving on freeways, getting on an airplane, being exposed to carcinogens, drinking, etc.)

  • http://www.mccaughan.org.uk/g/ g

    I don’t think the Devil’s Lotteries that are widely played every day have negative expected utility; it certainly isn’t obvious that they do. I think it’s pretty clear that on average aeroplane users (including the ones who have died) have benefited from aeroplane flight. Crude calculation: your probability of death on a commercial plane flight is somewhere on the order of 1 in 10 million to 1 in 50 million, depending on what sources you believe. Most attempts to work out how much value we (individually and collectively) put on human lives (our own included) end up with a figure below $10M. So: do you think the average person on a plane gets more or less than $1 of benefit from being able to make that journey? Seems pretty clear to me.

    The numbers aren’t quite so clear for driving, because the risk is much higher and there are more realistic alternatives, but I’d bet the expected utility comes out positive anyway.

    I think you have a much better case when it comes to drinking, but that’s because the probability of a bad outcome isn’t so very low, which seems to me to make it not a “Devil’s Lottery” regardless of the expectation.
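
    (The crude calculation, spelled out; the risk and value-of-life figures are the rough ones quoted above, not precise estimates.)

        p_death = 1 / 10_000_000       # per-flight death risk, pessimistic end
        value_of_life = 10_000_000.0   # upper end of typical estimates
        expected_cost = p_death * value_of_life
        print(f"expected mortality cost per flight: ${expected_cost:.2f}")   # $1.00
        # So the flight has positive expected utility for anyone who values
        # making the journey at more than about a dollar.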

  • Gray Area

    This is a circular argument. People figure the value of a human life is 10 million dollars BECAUSE people play Devil’s Lottery games where their lives are at stake, and you assume they maximize expectations. With those assumptions, of course the value of a human life is finite, and fairly small.

    A more natural assumption, in my opinion, is that the utility of death is negative and infinite (that’s certainly how I think about it). Note that we use similar moral calculations where we replace ‘your death’ by ’10 deaths’ or ‘100 deaths.’ This may be explained by the well-known scope insensitivity bias, or it may be explained by utility converging to the probability of a win for a small number of rounds (so the negative penalty is irrelevant, and consequently the number of deaths is irrelevant).

  • J Thomas

    The utility of death can’t be negative infinity, because people sometimes choose to die. How long can you expect to live? If you’re already dying of some fatal disease and you choose to die an hour early, how much utility have you lost? An infinite amount? Only if you accept Pascal’s wager and suppose that God won’t like you any more if you don’t see it through.

    So, you have an expectation of some length of further life. A day. A year. Ten years. Seventy years. Whatever your expectation, you put a finite value on it or you would never ever accept a ride with somebody who might be a bad driver.

  • J Thomas

    _If I was torturing the guy in an attempt to stop the ticking nuke, after that situation had been resolved and he’s no longer needed, I would kill him afterward and then not release the information of what happened or make up some story of how he died (suicide is a good one)._

    This is old, but I just noticed the argument.

    So, at one point around 10% of our prisoners at Gitmo had committed suicide. Should we believe that the US government would never follow your plan? How many of the innocent prisoners, the ones who’d cause the most trouble if released, were killed and documented as suicides?

    The trouble with this reasoning is that once we start distrusting our own government we get big problems from that. The less we trust our government, the less well it can protect us and the more incentive it has to lie to us, since we don’t believe in it anyway. Also, if you think the government is lying to you, that’s a first step toward becoming some sort of activist and getting onto government lists, and there’s no telling what sort of bad thing might happen to you from there.

    So it’s rational to completely discount this sort of thing. Of course the US government would never be as immoral as TGGP is. We mustn’t believe that could happen.

    [I’m being all ironical here but there are issues involved that I haven’t worked out.]

  • http://www.mccaughan.org.uk/g/ g

    Gray Area, I take your point about circularity, but I’m pretty sure my estimate of the value of my life isn’t strictly infinite. Obviously I wouldn’t agree to die in five minutes in exchange for a billion dollars, but that’s because the utility-to-me of a billion dollars is dependent on how much opportunity I have to use it. There are things I’d accept a shortened life in exchange for.

    On what basis do you say that the utility of death is minus infinity?

    (Here’s one possibility, which I think is tempting but clearly wrong: “There’s nothing that I’d accept in exchange for dying in five seconds’ time.” Let’s ignore altruism and suchlike for simplicity; with that proviso, I agree, most of us could truthfully say that. Does that mean that the negative utility we attach to dying in five seconds is infinite? Nope, it means that (again, ignoring altruism etc.) nothing *that can happen in the next five seconds* can provide us with enough utility to outweigh it. That’s hardly surprising.)

  • Nick Tarleton

    Nick: It’s not obvious that rationality of playing once should imply rationality of playing more than once. Consuming alcohol or smoking (or experimenting with drugs) is a good example. It may be worth it to try these things occasionally, but most people know that you don’t want to consume repeatedly enough to develop problems doctors expect will develop.

    The expected marginal utilities of these things are not constant – the N+1th beer has a different effect than the Nth. Drinking N beers and then stopping is still EU-maximizing.
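
    (Nick’s point as a toy model; all utility numbers below are invented for illustration.)

        def marginal_utility(n):
            # Assumed utility of the nth drink: +2.5, +1.0, -0.5, ...
            # i.e. diminishing and eventually negative returns.
            return 4.0 - 1.5 * n

        totals = {n: sum(marginal_utility(k) for k in range(1, n + 1)) for n in range(6)}
        best = max(totals, key=totals.get)
        print(totals)                              # {0: 0, 1: 2.5, 2: 3.5, 3: 3.0, ...}
        print("optimal number of drinks:", best)   # 2: drink N, then stop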

  • Gray Area

    Nick: But that’s not what I was talking about; I am talking about habitual smoking vs trying one cigarette. The time scale of a ’round’ is different, perhaps, but the game is the same.

    g: Perhaps the utility of death is not infinite but it’s certainly very large. My main point wasn’t to establish consensus on how to value death (it’s an important unsettled problem, after all), but to call attention to the circularity of establishing utilities of some events.

    J Thomas: “Whatever your expectation, you put a finite value on it or you would never ever accept a ride with somebody who might be a bad driver.”

    This is precisely the circular argument I have a problem with. Perhaps people accept rides from bad drivers (on occasion) because they aren’t maximizing expected utility but have a very large (or infinite) disutility for dying.

  • Chris

    Gray, are you sure one’s own death has a utility to oneself? It would seem more like a constraint on the available utility, that which exists while living. So the guy who’s asked what he would accept to die in 5 seconds is comparing a 5-second life to a longer one, not comparing a 5-second life to death.
    Ticking time bombs and torture: the utility cost of the torture is not in any of the more or less far-fetched knock-on scenarios; it is in the destruction of our own identity, as individuals and as nations, with high costs, both internal and external. And to come back to an earlier point, those identities, for many individuals and cultures, for better or worse, are based on valuing empathy (we’re back to the affect heuristic) more than on utility calculations. A consideration of the sociological impact on any of the cultures which have stepped over that line within, say, the last century, compared to their dominant value systems, would demonstrate the point.

  • J Thomas

    “This outcome is very, very bad. In fact it’s infinitely bad. But I choose it anyway because I don’t care whether I get a result that’s infinitely bad.”

    There’s something peculiar about this stand but I can’t quite put my finger on what it is.