22 Comments

I do have two points:

1. It takes more than a single lifetime to unfreeze. If he gets nothing, he doesn't have an incentive to find a replacement.
2. Or the worst option: we find out it's not possible to revive someone from the freeze.

It would be simpler to just overcome the fear of death.

Not sure why anyone wanting to be frozen would want to be awakened early. Seems like the world would have changed considerably in any event. It also seems likely our minds have limited lifespans just as our bodies do, so there may be less attained than desired.

I disagree with that description. I'm not that hopeful about adding other info to save, but the cost is low so why not.

Why would you be interested in being revived into, as Steven Poole noted from your book, a 'hellish cyberworld'?

Also, if you are interested in being revived as an em, then have you considered combining the preservation of your DNA and digital footprint as a backup plan?

This product isn't for me, but the discussion is interesting. As I think you see, checking for fake revival is a different matter from the problem I identify, because my concern applies even when all the actors are honest. It occurs to me now, with the help of your 10:30 postscript, that a contract such as you propose should have a clause requiring the trustee to wait to revive until the technology has reached an X% success rate on contemporary (to the future) people who die and are revived into computers while they still have plenty of friends and relatives around who can talk to the em-person and say, yeah, that's grandma alright.

Passing that kind of enhanced Turing test would only satisfy people with a pretty light requirement for personhood and personal identity, but it is still way more informative than directing someone to ask your future-em if it is happy with its birth. If your brain is revived in a world where lots of brains are already being revived to the satisfaction of friends and relatives, the only risk you run that isn't shared by the future-people who are happy doing this is frozen-human shelf life.

I mentioned checking for fake revival. There are unlikely to be tests for more general philosophical doubts. If you have strong enough doubts, this product isn't for you.

"One simple fix is that, once you are revived, you rate the whole process on a 0 to 100 scale, and your agent only gets that percentage of the max possible prize."

This does nothing to ensure that the "you" who rates the whole process is the "you" who signed the deal. The entire idea of an "em" rests on a few assumptions that may be false. First, it assumes that a person is no more than a bundle of information-sharing connections in a body that may be instantiated in a computer without loss. That is a breathtaking assumption that conflicts with most everyone's feelings on the subject, and maybe not wrong for all that, but clearly we are a long way from knowing that it is true. If it is true, we must further assume that when someone dies, is frozen, and is revived in a computer, nothing is lost.

Now, if it turns out that there is some important aspect of a person that cannot be instantiated on a computer, or that cannot survive death, freezing, and revival to be instantiated on a computer, why would we expect the computer-based being that results to know, or care, about the important aspect of Robin Hanson that is utterly lost or transmuted? That being may either not know, or may not care, and rate the revival at 100%. If the dead Robin Hanson were able to meet this new Robin Hanson, he might be horrified and pronounce him a monster; still, the new Robin Hanson is happy to be himself, the trustee is rewarded and retains a clear conscience, and the purveyors of the technology may have no reason themselves to know they have failed.

In order for this scheme to work, we would need tests of both personhood and personal identity that can be run outside of the subjectivity of a single being, so that we can establish that the "you" doing the rating is the "you" who died.

Okay, I've corrected the post with your numbers.

Between CI and Alcor they say 2700. No idea about the rest, but I wouldn't think it'd even hit 3000 total.

Know the current number of total customers signed up?

No, but maybe somewhere between the two extremes is reasonable. Examples where decisions aren't normally connected financially to their results include parenting, teaching school children, and making decisions on behalf of elderly relatives. My idea is also that a person is less likely to think of you as an "ally" if you signal distrust.

Nitpick that I thought you'd find interesting. Cases have been accelerating lately. There are over 320 frozen between Alcor and CI, and KryoRus claims another 56. There are probably a couple of others too, so the total is likely very close to 400 now.

I'm not sure I care about the worlds where my future measure is zero. (From an evolutionary perspective I ought to, but maybe not from a personal perspective.)

On the other hand, cryonics is (fairly) cheap insurance. Certainly if it has even a small chance of success (say, 1%) then my future measure is many orders of magnitude larger than if I don't sign up.

So I guess I should sign up. You with Alcor?

We arrange things to give people incentives all the time. Do you think those are all mistakes, and we'd do better to not give incentives but just trust to their good natures?

All of this assumes that there's little trust between you and your agent. Would all this be necessary if, say, your descendants were the ones making the decisions? Of course, you can't guarantee that your descendants will survive, or be willing to do it. But I'd be worried that a complex reward system sends a message that you don't trust the person, and that they'd be more likely to act selfishly as a result.
