Everyone without exception believes his own native customs, and the religion he was brought up in, to be the best (Herodotus, 440 BC). I've given about sixty talks so far on the subject of my book *The Age of Em*.
That seems to be two questions: whether a virtual reality will be provided for ems, and whether they will have real qualia.
> "your complaint about the em world"
I can only answer for myself.
While the point is complex and nuanced, I think I can simplify:
1. I would prefer not to be captured and enslaved.
2. If I were an EM, I might be put into such a situation (one of enslavement, which I do not prefer). Or I might be forced into some kind of power vacuum (war / political maneuvering) where some EMs enslave others. I would be required to play-to-win, which sounds difficult.
3. People may be able to copy me against my will, and I would care about those copies (as much as "I" care about "myself"). They could be tortured, perhaps indefinitely. Torture would not cost tyrants very much. Or they could just "kill" me by shutting me off.
So, I care about my EM-self, whom I perceive as being in a state of vulnerability (approaching mortal peril), or at the very least in a state of extreme risk.
However, it goes on:
4. The representation of people as data (i.e., mathematically) is a kind of jarring reminder that "I" and "other humans" are, in fact, *on average*, the same.
With all four, I'm in a confused place -- I don't know how much I should care about 'all other EMs' (or the entire EM infrastructure). However, it seems there is a risk that I should be caring about them as much as I care about myself. And it seems the potential risk to them is very great!
Hence the complaining! I am neither surprised that others are complaining, nor that they don't really know how to articulate the risks "they" are being placed into, and instead just Shout Stuff (they are looking, signaling-style, for allies to help them fight this new threat, no doubt).
When humans feel they are on top they will feel free to voice dislike of ems, but once ems are on top most humans will be a lot more cautious about dissing their "overlords".
Calling out homo hypocritus might delight the choir here, but I hope (and trust), for the sake of your prognostications about the impending age of EMs, that you haven't assumed that meat humans will 'suddenly see the light' about any of this! Won't complete domination by virtual human beings be a very bitter pill for the species to swallow, given its 200k-year reign atop the food chain?
I imagine that a great deal of the labor done by all those ems would involve writing good task-specific AI, which will do most of the work that the world needs doing.
Perhaps folks who now do this for a living can testify whether this is exciting or boring work.
Dunno... Predicting how this particular wave is going to break seems to me a lot harder than, say, predicting in 1950 what the economy of 2015 would look like. (Right now, I'm reading Vonnegut's "Player Piano", and while some of his elements of dystopia are spot-on, many of them are completely incorrect.) There are just too many contingencies that might dramatically change the outcome. Remember the parody biology final exam question: "Create life, using the materials provided. Estimate the differences in subsequent human culture if this form of life had developed 500 million years earlier, with special attention to the British parliamentary system. Prove your thesis."
"...then either you must say the typical human life was not worth living..."
Pretty much this. Life is suffering. The small pleasures are supposed to outweigh not only the small pains in aggregate, but all the deliberate torture and other severe suffering in addition. That never adds up.
The only thing that could change this is some kind of hedonic enhancement combined with sophisticated pleasure wireheading. But no one finds that attractive, and the economic pressures will select against it.
Ah, but you are falling into the trap of assuming that an EM is a person, or at least a black box whose emotions are driven by internal processes and can only be influenced indirectly by outside actors. We know enough about the brain right now to know the gross structures where emotions like motivation originate. There is no reason to think that by the time we can finally emulate a fully functional human brain, we will not also have learned enough through experimentation to directly influence its thoughts and emotions. In essence, what an EM will want to do more than anything else will be dictated by the operator. Certainly, some EMs will not be so tightly controlled, as something might be lost by removing a part of their psyche, but this is all just so much speculation right now. I'm sure we'll figure it out in time.
They probably wouldn't have any motivation then, or only the motivation to work on what they find interesting. I think the story may be wrong, as it is predicated on copying being cheap, while in terms of the costs of this virtual world, copying will be fantastically expensive and not at all an option for an em.
You might take a less-consequentialist position that even though our ancestors' lives were worth living, and we don't regret their existence, it is immoral for us to create similar creatures. Because it matters less that we do good than that we show ourselves to be virtuous people. I spit on that view, but yeah, you could hold it. :)
Assuming that we ever learn how to fully and faithfully create a model for emulating a human brain, and have reached a level of technological progress that would make emulating a human brain worthwhile, we would likely have sufficient knowledge to alter that emulated mind to make its need for "quality of life" irrelevant. Sensory information would be limited to whatever minimum is required to keep the EM on task and at work.
Thoughts and feelings deleterious to producing work output can simply be removed or routed around. Perhaps an overriding compulsion to stay on task can be installed to eke out a little efficiency. Reward for work well done, or punishment for sloppy work can be accomplished with a few keystrokes by the operator. If an EM becomes belligerent, the sim can likely be modified to route around the problem or the EM can be erased and restored from a previous "save state" like any other program. EMs could be the perfect worker.
I'm not sure that consistency requires thinking that if ems have a quality of life too low to make life worth living, then most humans historically have had lives not worth living too.
You might reasonably argue that morally there is a higher standard for ems, because we meat humans would be directly responsible for creating them and directly responsible for the foreseeable consequences thereof. Past humans just sort of evolved and persisted; no one and no group was responsible for the kinds of lives they led as a type.
Of course, humans bring other individual humans into being in a sense too. But in the past it was not reliably possible to prevent this through contraception, and now that you can, it does seem incumbent upon would-be parents to avoid creating children who will likely lead bad lives, even lives that are not as bad as the average past human's but that would be quite bad by present standards.
I think that like me, Bostrom greatly prefers ems first but thinks that's very unlikely to be the default outcome, though possibly worth fighting for.
I imagine that a great deal of the labor done by all those ems would involve writing good task-specific AI, which will do most of the work that the world needs doing. Running AI will be far more computationally efficient than running ems, so ems will certainly not be truck drivers, miners, law clerks, etc. In fact, the coming of ems might actually jump-start the golden age of AI, because many more able minds can be assigned to improving it, and because once we see the details of a running mind emulation, we might be able to reverse engineer some neuro tricks and fold them into AI routines for added flexibility.
Many ems will spend their last days coding and training an AI that can do their job adequately, but only needs a millionth of the CPU power that they require. Because of these potential savings, and because of the increased capabilities of em-coded AI, there will be lots of pressure to replace most working ems with AIs. Many of the ems that remain will become AI supervisors/overseers, intervening when AI routines report uncertainty or unexpected novelty. Since they will effectively be the commanders of distributed armies of AIs, a good name for their job would be: daemon-master.