In the recent (good) movie Star Trek, the heroes spend several years training in the Star Fleet academy, and then spend a few days on their first real mission, which turns out to be an excellent learning experience. Should there be a sequel to this movie, the heroes would seem to start out especially well-positioned compared to most Star Fleet officers. In fact, if it were possible, Star Fleet might be tempted to fill much of their fleet with copies of these few heroes.
Yes, even the hard-work month is worth living.
Ohh, maybe I misinterpreted you. Were you specifying that even the hard-work month was above the zero threshold?
How about EM rape fantasies? You don't think someone will get a kick out of keeping a "real" person in a virtual cage and sadistically sexually torturing them?
I think you should try to formalize your intuitions about the goodness/badness of various possible worlds, to see the constraints this imposes on your valuation.
In particular, if, as you seem to argue elsewhere, there is no special notion of identity that makes it less good to have a copy enjoy themselves than to enjoy yourself, then the overall goodness/badness of the world can't be a simple function of whether each life in that world seems desirable.
In particular, if most experienced months are very bad, regardless of whether they are experienced by a copy of someone who just took a vacation, it seems you are committed to the conclusion that the world is bad, for the same reason it makes the world better to send a copy of yourself off to have a good time for a month. Even if you give each copy a nice retirement of several years, but assume civilization continues to grow its computing capacity so fast as to make most months experienced in any finite time unpleasant months, you should conclude it's a mostly bad world.
As an aside, it seems to me troubling that so many of your EM arguments appeal to relatively unexamined intuitions about a good life, or the like, that seem to be stretched beyond their useful domain of applicability when evaluating the goodness of an EM world. I'd argue that a much more abstract approach is called for here.
It'll also work great until there's a paradigm shift in business models. It may be cheaper to delete a workforce of obsolete EMs and replace them with a whole new type rather than bother with a "patch".
Austrian economic theory will probably be particularly hard on EMs, and our future selves will probably be pretty expendable, without some notions of basic post-human rights (rights for retraining?).
And it'll work great until everyone learns how to exploit this human. Monocultures are particularly susceptible to parasitism.
To Robin: What was good about the most recent Star Trek movie?
It depends a lot on the value of optimization as compared to diversity. I'd guess evolution gives us a hint: e.g., we could have been as similar as social insects, but we aren't. It depends on the nature of the environment, of course. In formalized, human-built, civilized systems, optimization rules and diversity is worthless. Exploring a hostile universe, diversity is fairly useful.
I've just written a brief dialogue of how I imagine the first conversation with the very first em might go:
I agree that train-select-copy is one plausible scenario for ems/uploads. It is also a possible scenario for AIs (not necessarily very general AIs, either), if the AI has some sort of learning algorithm that builds internal data structures which are easier to build through learning than through explicitly coding them.
If you think the singularity will be powerful enough to resurrect our consciousnesses somehow, you can't discount the possibility that it will do so for malevolent reasons, perhaps only for the joy of torturing us (maybe it will have a grudge against humanity, its creator). Maybe there's nothing any of us can do to escape an eternity of pain and torment at the hands of a sadistic future AI.
Have a nice day!
People do lots of things that don't necessarily make sense in simplistic terms, since their motivating optimization functions vary widely, sometimes in sick ways.
The possibility of creating virtual hells makes me worry about ems. While virtual hells may not (usually) make economic sense, sometimes economic optimization doesn't happen in the short or medium term (long enough for ems stuck in a hell running at high clock speed!). So I wouldn't be so sure that the rationalism of markets will prevent virtual hells all by itself.
The only thing I'm sure about is that this is really an unpleasant conversation topic, and not one that you could safely have at a dinner party. Virtual hells of seemingly endless torment? Yech.
It seems obvious that this would be one of the main advantages to ems. The idea that people would train several and just select the top one to do actual work is a good one.
It doesn't seem unlikely that people will devote large amounts of resources to simulating suffering as accurately as possible. They're called war movies and video games.