Avoiding Death Is Far

Avoiding death is a primary goal of medicine. Avoiding side effects of treatment is a secondary goal. So it makes sense that in far mode doctors emphasize avoiding death, but in nearer mode avoiding side effects matters more:

The study asked more than 700 primary-care doctors to choose between two treatment options for cancer and the flu — one with a higher risk of death, one with a higher risk of serious, lasting complications. In each of the two scenarios, doctors who said they’d choose the deadlier option for themselves outnumbered those who said they’d choose it for their patients. … Two hypothetical situations were presented: one involved choosing between two types of colon cancer surgery; the less deadly option’s risks included having to wear a colostomy bag and chronic diarrhea. The other situation involved choosing no treatment for the flu, or choosing a made-up treatment less deadly than the disease but which could cause permanent paralysis. (more; HT Tyler)

As other people are far compared to yourself, advice about them is more far. Similar effects are seen elsewhere:

One study asked participants if they would approach an attractive stranger in a bar if they noticed that person was looking at them. Many said no, but they would give a friend the opposite advice. Saying “no” meant avoiding short-term pain — possible rejection by an attractive stranger — but also missing out on possible long-term gain — a relationship with that stranger.

Since fear of being laughed at for doing something weird is also near, far mode also seems the best place to get people to favor cryonics. A best case: folks recommending that other people sign up at some future date. How could we best use that to induce concrete action?

Added 11p: Katja offers a plausible alternative theory.

  • Wophugus

    I don’t avoid cryonics for myself because I don’t want to look silly, I avoid cryonics for myself because I don’t want to waste resources on an eternal-life fantasy peddled by flim-flam men. I may be wrong in my assessment of cryonics, but I would still guess that mine is a much more common reason for avoiding cryonics than is a desire to appear normal. By way of comparison, trying to secure eternal life by joining a major world religion would look very normal, but it still doesn’t appeal to me for the exact same reason securing longer life via cryonics doesn’t: it strikes me as irrational.

    I guess the takeaway is that cryonics advocates should spend more time trying to prove that a frozen human brain can be reverse engineered into its working state and less time worrying about how to get people to analyse cryonics in far mode.

    • nazgulnarsil

      Being conned by flim-flam men would make you look silly, no?

    • http://lukeparrish.rationalsites.com/ Luke Parrish

      As it happens, a lot of time has already been spent on arguments as to why the vitrified human brain can (or, perhaps, cannot) be brought back, but they are only possible to analyze in far mode.

    • http://juridicalcoherence.blogspot.com Stephen R. Diamond

      I avoid cryonics because (among other reasons) its potential for harm is, to my mind, far greater than its potential for good. That is, if I cared (which I don’t) about the welfare of my disembodied essence, I would fear possible consequences more than I would welcome them. In exchange for a possible “heavenly” immortality in a future world, I sustain the possibility of suffering a (“near”) eternal hell.

      The point is made vividly in the science fiction novel “Broken Angels” by Richard Morgan, when the protagonist, discussing a form of near-immortality with an enemy soldier, says:
      “I’m not going to kill you, Deng. That’s what I said. I’m. Not. Going to kill you.” I shrugged. “Far too easy. Be just like switching you off. You don’t get to be a corporate hero so easily….

      “I’ll leave you here, Deng. I mean we’re in the middle of the Chariset Waste, Deng. Some abandoned dig town…. I’m just going to leave you plugged in.”

      The protagonist explains that the digitized soldier’s power pack would last decades, which is centuries of virtual time.

      “Which is going to seem pretty fucking real to you, sitting in here watching the wheat grow. If it grows in a format this basic. You won’t get hungry here, you won’t get thirsty, but I’m willing to bet you’ll go insane before the first century’s out.”

      Cryonics is pathological optimism, on so many levels.

      • http://www.uweb.ucsb.edu/~criedel/ Jess Riedel

        Presumably, you think that the expected negative utility of “hell” outcomes greatly (not just slightly) outweighs the expected positive utility of “heaven” outcomes. Given a non-negligible chance that there is some existential event in your natural lifetime (e.g. AI singularity, encounter with advanced aliens, etc.) which could lead to a hell, why don’t you commit suicide? Or are your probabilities (fine-)tuned so that the expected utility is positive for living a natural lifespan but negative for being revivable in a century?

        If almost everyone else on the planet were signed up for cryonics, and it cost you $100,000 to prevent yourself from being frozen, would you really so choose? It seems hard to believe you would.

        (These are honest questions, not just attempts to point out possible inconsistencies in your position. I think these eternal-hell arguments are powerful, so I’m interested in what you think.)
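        For concreteness, here is a toy version of the fine-tuning asked about above (positive expected utility for a natural lifespan, negative for revival). Every probability and utility below is a made-up number, purely for illustration, not anything asserted in this thread:

        ```python
        # Toy expected-utility comparison; all numbers are illustrative assumptions.
        U_heaven, U_hell, U_normal = 100, -10_000, 1

        # Living out a natural lifespan: tiny chance of a "hell" outcome.
        p_hell_natural = 0.00005
        eu_natural = p_hell_natural * U_hell + (1 - p_hell_natural) * U_normal
        print(eu_natural)   # ~ +0.5 (positive)

        # Being revivable in a century: larger chance of both extreme outcomes.
        p_heaven_cryo, p_hell_cryo = 0.02, 0.005
        eu_cryo = (p_heaven_cryo * U_heaven + p_hell_cryo * U_hell
                   + (1 - p_heaven_cryo - p_hell_cryo) * U_normal)
        print(eu_cryo)      # ~ -47 (negative)

        # With numbers like these, expected utility is positive for a natural
        # lifespan but negative for revival: the tuning the question asks about.
        ```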

      • Hedonic Treader

        Or are your probabilities (fine-)tuned so that the expected utility is positive for living a natural lifespan but negative for being revivable in a century?

        This is a very interesting question. To answer it for myself, my personal probabilities are roughly tuned so that the expected utility is positive for living the next 5-10 years while updating on the evidence along the way. This factors in probabilities of severe suffering from accidents, sudden onset of illnesses, crime etc.

        One problem is that phenomenal consciousness is not bound to a self or individual personhood in the way most people think it is. Experiences of affective valence such as pain or pleasure are actually very local phenomena. There is no metaphysical self that integrates them over a full lifetime for one person. As a consequence, utilitarianism becomes much more rational than the types of egoism implied in these chains of thought. Unfortunately, this violates strong intuitions about our self-image as unitary persons.

        Given the fact that sentient existence itself is usually involuntary to a great degree, and error signals such as agonizing pain can be quite severe in their negative valence, I would prefer if sentient life in general did not exist. It remains to be seen if there is some fundamental technological fix for this, but the probabilities are dense. Expected value seems generally negative in my current intuitive estimate, which is why I welcome increases in existential risk over decreases.

        Consequently, I have no personal interest in cryonics. If it were feasible, and I had to pay $100,000 to prevent it (given I had that kind of money), I would choose to pay.

      • http://lukeparrish.rationalsites.com/ Luke Parrish

        Relevant link.

        It’s interesting that the reasons cited against it by one group of cryonics avoiders are so contrary to the reasons cited against it by another. If the probability of it working is too low, the probability of reanimation in a negative circumstance is much lower still.

        But look at it this way. Even if not getting cryopreserved avoids having it happen to you personally, that does nothing to prevent it happening to billions of other people. In fact, if it happens to you, that means you live in a universe with a high probability of it happening to begin with. So your best bet is to work towards friendly AI if you are scared about eternal-torment scenarios, not commit suicide (which only removes one person from the equation).

      • Hedonic Treader

        So your best bet is to work towards friendly AI if you are scared about eternal-torment scenarios, not commit suicide (which only removes one person from the equation).

        Creating a super-machine that can be switched on, and then we “Win”, as Eliezer Yudkowsky expressed it recently, sounds too good to be true. Planning fallacy comes to mind. I sometimes wonder if he already has a Gantt chart for utopia in his drawer (just kidding).

        Let’s analyze this problem a bit further. Various aspects need to come together to make supertorture scenarios possible (I wouldn’t call them eternal-torment scenarios unless they’re actually open-ended). For one, consciousness needs to exist. And these consciousness patterns must include error signals of serious negative valence. And they must propagate over time despite their aversive nature. In other words, there must be suffering minds who exist involuntarily and can’t do anything about their suffering and/or existence.

        So the solution space roughly falls into these categories:

        1) Prevent consciousness from existing.
        2) Prevent seriously negative error signals from occurring.
        3) Ensure the availability of successful counter-measures by the minds that contain the error signals (e.g. changing situational context, switching the error signals off, committing insta-suicide, etc.)

        Regarding 1), one could try to come up with intelligent agents that aren’t conscious and out-compete all other sentient life, or to kill off all replicators, i.e. end all sentient life and its evolutionary basis. And hope that no alien life exists. This one seems easiest.

        Other than that, a stable benevolent singleton (Friendly AI or otherwise) could do the trick by tightly enforcing 2) and/or 3). This approach would be compatible with the perpetuation of positive utility.

        A remote third option would be to re-design sentient minds so that 2) or 3) is guaranteed to hold, while killing off all other minds/replicators and using error-correction mechanisms to guarantee that future evolution can’t re-evolve minds that don’t contain fail-safe 2) or 3). This one seems hardest.

      • Hedonic Treader

        Ah, and of course the “error-correction mechanisms” in the third solution would have to occur on the memetic level as well, i.e. it must be ensured that no one later creates artificial minds without 2) or 3). Seems virtually impossible.

      • http://www.uweb.ucsb.edu/~criedel/ Jess Riedel

        Even if not getting cryopreserved avoids having it happen to you personally, that does nothing to prevent it happening to billions of other people. …So your best bet is to work towards friendly AI if you are scared about eternal-torment scenarios, not commit suicide (which only removes one person from the equation).

        Well, that depends on how much you value other people when it comes down to brass tacks. I think almost all of us, if presented with an immediate, believable possibility of eternal torment (i.e. near mode), would sacrifice just about everything and everyone to save our own hide.

      • http://lukeparrish.rationalsites.com/ Luke Parrish

        I think almost all of us, if presented with an immediate, believable possibility of eternal torment (i.e. near mode), would sacrifice just about everything and everyone to save our own hide.

        What bugs me is that it seems like the chances of supertorture in a post-cryonics survival scenario are easily treated as near whereas the chances of survival and long-term satisfaction are usually treated as far.

      • mjgeddes

        Creating a super-machine that can be switched on, and then we “Win”, as Eliezer Yudkowsky expressed it recently, sounds too good to be true. Planning fallacy comes to mind. I sometimes wonder if he already has a Gantt chart for utopia in his drawer (just kidding).

        SAI_2100: Open your eyes

        Cryonics Patient: (groggy): What? Where?

        SAI_2100: Open your eyes! Abre los ojos!

        Cryonics Patient: Did we win?

        SAI_2100: You betcha, kiddo. #winning, as a human called Charlie Sheen likes to say

        Cryonics Patient: Yahoo!

  • http://lukeparrish.rationalsites.com/ Luke Parrish

    I’ve been thinking a lot about this very topic lately. Cryonics has the problem that it overlaps both near and far, tending to require quite a bit of thinking on both sides of the fence. If it could effectively become all-far — an abstract cause on par with SETI or preventing global warming — there would be a better chance at attracting a large following and comparable amounts of financial support.

    Unfortunately, people tend to see any action they take directly on their own behalf, especially if premeditated and not as part of a group activity, as near. So making their own arrangements is out of the picture for as long as they remain in far mode.

    To work around this, I am thinking a plausible solution would be organizing group fundraisers on behalf of underfunded cases which have already been performed. Participants would be asked to donate in order to cover the cost of keeping the preserved individual (who is now more far mode than even a corpse) suspended indefinitely until science can bring them back. Once they’ve donated to the cause, they will be more likely to sign themselves up down the road, or at least sign a consent statement for others to do it for them.

    To make sure the patient is never in any danger of thawing out from lack of funding, there would ideally be a “temporary fund” set aside before the case is accepted, which pays for initial and ongoing costs during the fundraising phase, and is reimbursed after the patient’s required principal amount is attained. At this point the patient “graduates” (with much fanfare and congratulations to their surviving friends and relatives) and the temporary fund now has one more available “slot” for another patient to occupy.
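    To make that flow concrete, here is a minimal sketch of how such a temporary fund might be tracked. The class, the slot bookkeeping, and the exact reimbursement rule are hypothetical illustrations, not any actual cryonics organization’s policy:

    ```python
    # Hypothetical sketch of the "temporary fund" idea; names, rules, and any
    # numbers a caller supplies are illustrative assumptions only.

    class TemporaryFund:
        def __init__(self, capital, slots):
            self.capital = capital   # money set aside to float underfunded cases
            self.slots = slots       # how many cases can be floated at once
            self.advanced = {}       # patient -> costs covered so far

        def accept(self, patient):
            """Accept a new case only while a slot is free."""
            if len(self.advanced) >= self.slots:
                return False
            self.advanced[patient] = 0.0
            return True

        def cover_costs(self, patient, amount):
            """Pay initial and ongoing storage costs during the fundraising phase."""
            self.advanced[patient] += amount
            self.capital -= amount

        def graduate(self, patient, donations, required_principal):
            """Once donations cover the required principal plus what was advanced,
            reimburse the fund and free the slot for the next patient."""
            if donations >= required_principal + self.advanced[patient]:
                self.capital += self.advanced[patient]
                del self.advanced[patient]
                return True
            return False
    ```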

  • kebko

    I heard this on NPR today (forget which show), and their take-away was that since doctors give patients different advice than they would take themselves, the doctors need doctors, too, so that they will be directed to take different measures than they would actually choose for themselves…. I am not making this up.

    • http://www.uweb.ucsb.edu/~criedel/ Jess Riedel

      It’s not at all obvious that near-mode decisions are the “better” ones (although here I’d say they’re more likely to be better than worse). Irrationality means that choices imposed through paternalism—whether by doctors, the state, or actual parents—can be better by almost any metric than those an agent would make for themselves.

      • http://www.uweb.ucsb.edu/~criedel/ Jess Riedel

        The social situation from the original post is a good example. Does anyone doubt that the friend’s far advice (approach the attractive stranger) is better than the standard near choice (sip one’s beer)?

    • Doug S.

      Well, lawyers are indeed generally advised not to represent themselves at trials…

  • http://www.brazzy.de/ brazzy

    This provides an interesting contrast with my girlfriend’s experiences. She works as a nurse in a cancer ward and has repeatedly told me about cases where doctors (not primary care, of course) chose treatments with massive (and to them quite visible) side effects in cases where the life-prolonging effect was certain to be very limited or pointless:

    One patient was given chemotherapy even after showing massively adverse reactions, with the result that he spent a week in indescribable agony before dying from the side effects – but cured of the cancer.

    Another was guaranteed to die within a few weeks of an untreatable cancer, but when she developed sepsis, she was given highly invasive surgery that left her in a coma and probably dead within days.

    • Hedonic Treader

      Nice. We really need more pointless torture on the planet.

  • http://juridicalcoherence.blogspot.com Stephen R. Diamond

    “Since fear of being laughed at for doing something weird is also near, far mode also seems the best place to get people to favor cryonics.”

    However, hypocrisy is said to be the function of far mode. Since hypocrisy serves to avoid social disapproval — ridicule being a form of it — your interpretation of the modes is inconsistent.

    • http://hanson.gmu.edu Robin Hanson

      Yes, that worries me.

  • Eric

    Sounds like you are sold on cryonics. I don’t know much about the science, but even assuming that it is solid, I would suspect that the incentives of any cryonics provider would be to only stay in business until the cost of ongoing preservation exceeds the average new business signup profits. Now they may have to breach certain contractual arrangements, but who will sue them? Even assuming that they did not behave strictly rationally, most businesses just don’t survive for very long.

    I have put forth this perspective on lesswrong and found that they reacted with the outrage that one expects from true believers. It was very disappointing given their claimed rationalism.

    Robin are you under this spell as well?

    • http://lukeparrish.rationalsites.com/ Luke Parrish

      assuming that it is solid, I would suspect that the incentives of any cryonics provider would be to only stay in business until the cost of ongoing preservation exceeds the average new business signup profits.

      Are you referring strictly to economic incentives here? What about far-mode incentives like the imagined duty to the patients? Or how about if the cryonics providing organization is structured to only accept members into power if they demonstrate loyalty to the idea of keeping the patients preserved?

    • http://www.gwern.net gwern

      > I have put forth this perspective on lesswrong and found that they reacted with the outrage that one expects from true believers.

      How fortunate that those signed up with the Alcor *Foundation* don’t need to ponder that question.

    • mycroft65536

      How long do you think this implies that a cryonics organization would operate? What if evidence were presented that such a firm continued well past that point? Also, you left out revenue from membership fees.

      • Eric

        Yes, thanks for the correction. Add membership dues to my calculation. If I were presented with evidence that these particular organizations had especially solid footing, so that they might survive for centuries, I would happily rethink my stance. The idea of my dead body rotting in a graveyard is not appealing.

    • http://daedalus2u.blogspot.com/ daedalus2u

      I think the point is that as the economics of cryopreservation change, new startups won’t have the legacy costs of the old facilities and so will be able to offer the same or better service to new customers at a cheaper price.

      The clients who were frozen first will be the most expensive to revive because they have the most damage to fix. Why would someone frozen in 2060 pay to subsidize someone frozen in 2010? A competitive cryonics startup facility in 2060 will be cheaper and provide better service than one started in 2000.

      A business model that depends on unpaid labor and dues of current members is not sustainable if those members can get a better deal elsewhere.

    • Tim Freeman

      I don’t know much about the science, but even assuming that it is solid, I would suspect that the incentives of any cryonics provider would be to only stay in business until the cost of ongoing preservation exceeds the average new business signup profits. Now they may have to breach certain contractual arrangements, but who will sue them? Even assuming that they did not behave strictly rationally, most businesses just don’t survive for very long.

      That’s a real risk. If that’s a real concern, you might want to have a look at the structure of Alcor’s Patient Care Trust Fund; see:

      http://www.alcor.org/AboutAlcor/patientcaretrustfund.html

      There is a requirement that the trustees of the fund have relatives suspended at Alcor. The trust fund and that qualification rule for trustees don’t get rid of the risk, but they might make it less than you were assuming.

      On the other hand, you apparently didn’t do this already; otherwise you’d have mentioned it. Why haven’t you investigated this question already?

      The answer to that probably has more to do with your values than the state of cryonics, since it’s a decision not to discover the state of cryonics.

  • http://lukeparrish.rationalsites.com/ Luke Parrish

    I wonder if people are simply more sensitive to pain in near mode? That sounds like a testable prediction.

  • Pingback: Why death for oneself, suffering for others? | Meteuphoric

  • Nikki Olson

    How much does empathy affect near/far recommendations for others? For friends I am close with, I think I can recommend as if they were ‘near’ to some extent.

    When it comes to cryonics, it is difficult to have an impact when recommending something so expensive.

    If cost is a main inhibitory factor, at least in the intelligent, forward-looking community, an important question is, “How many people would sign up if it were only $5,000?”

    • Greg Colbourn

      If you are young, it is not expensive. I’m 30 and pay for it with life insurance ($14 a month); membership fees are $120 a year (Cryonics Institute). So it can cost less than a dollar a day. Even with all the caveats, and supposing there is only a 1% chance I’ll get frozen and revived in a decent state, I still think that’s money well spent (when considering the alternative).
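      Taking those figures at face value, the arithmetic does come out under a dollar a day; a rough check using only the numbers quoted above (the commenter’s own figures, not actuarial data):

      ```python
      # Rough check of the figures quoted above.
      insurance_per_month = 14     # life insurance premium
      membership_per_year = 120    # Cryonics Institute membership fee

      annual_cost = insurance_per_month * 12 + membership_per_year   # 288
      print(annual_cost / 365)     # ~0.79 dollars per day
      ```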

      • Nikki Olson

        Yes, that’s true. Cryonics Institute is much cheaper.

      • http://www.uweb.ucsb.edu/~criedel/ Jess Riedel

        It’s cheap when you’re young because the likelihood you die when you’re young is small, so the probability of being frozen is likewise small. As you get older, when you are more likely to die, the cost will go up. You shouldn’t be measuring the cost in $/year; you should be measuring the cost of one freezing.

        In particular, your chance of dying during your 10-year term life insurance is only of order 2%, so your chance of being revived is much less.
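        One rough way to see the point, reusing the figures from the comments above (the ~$288/year cost and the 2% term mortality are both assumptions carried over from this thread, not actuarial data):

        ```python
        # Cost per actual freezing rather than per year, pooled across policyholders.
        annual_cost = 14 * 12 + 120     # premium plus membership, ~$288/year
        term_years = 10
        p_frozen_during_term = 0.02     # chance of dying (and so being frozen) in the term

        total_paid = annual_cost * term_years           # ~$2,880 over the term
        print(total_paid / p_frozen_during_term)        # ~$144,000 per actual freezing
        ```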

      • Doug S.

        But it is very likely that someday you will indeed be old, and then it will be expensive.

      • Greg Colbourn

        My insurance is a 35-year fixed term. So it will start to become expensive when I’m 65. By then, there may well be other possibilities, such as much better anti-aging medicine, a big decrease in the cost of cryonics through widespread adoption, uploading, or even the collapse of civilisation to a point where cryonic preservation/storage is no longer feasible.

  • Tony B

    There is of course always the possibility that someone sees pain for themselves as more severe than pain for others, and thus there are more circumstances in which someone would choose less pain over less death for themselves than for others.

  • Robert Van Kirk

    It would seem that a variety of unhealthy behaviors, like consuming any amount of sugar, would have “positive side effects” in the near but also cause death in the far. However, few people I know are so militant in their diet as to avoid all fructose and sucrose, gluten, casein protein, processed meats, industrial plant polyunsaturated fats, etc.

  • http://www.medicalskeptic.com Duncan

    A possible explanation:

    Physicians are very familiar with the embarrassment and hassles of enduring long-term side effects from various treatments, and therefore the majority accept a higher risk of death to lessen the risks of serious, long-lasting side effects.

    Physicians tend to think that patients and their families would rather live with long-term side effects than die.

    In addition, patient survival even with side effects is considered a ‘success’ while the death of a patient is always scored as a physician ‘failure.’

    And then there is an additional cynical explanation that a patient with serious long term side effects is a patient who needs continuous medical care thereby increasing physician monetary gains as well as influence.

  • csf

    Robin (or Katja) —

    It’s not obvious to me through introspection that “near” is more afraid of harm while “far” is more afraid of death. If you ask me abstractly whether I would undergo the riskier cancer surgery, I think I would say yes. But if I imagine you give me a button that will kill me right now with low probability if I press it and will harm me for sure if I don’t, all I can think is, “I don’t want to die!”

    I also think more people say they would prefer euthanasia over being bedridden than will actually make that choice when the situation arises, though I can think of other biases that may be at work here, such as overestimating the degree to which your happiness depends on your situation. (Near/far is a bias insofar as both can’t be right on any one question.)

    Did you base this on strong evidence, or did your introspection just return the opposite result from mine? Either way, Katja seems to strongly agree with you, since she skipped straight to explaining why this is true, without a word on whether it is true.