Paul Krugman in ’06: Serfdom in Russia wasn’t an institution that dated back to the Dark Ages. Instead, it was mainly a 16th-century creation, contemporaneous with the beginning of the great Russian expansion into the steppes. Why? … There’s no point in enslaving … a man unless the wage you would have to pay him if he was free is substantially above the cost of feeding, housing, and clothing him. … Indeed, by 1300 – with Europe very much a Malthusian society – serfdom had withered away from lack of interest.
Aron, I too am skeptical of this premise. To put another spin on it: in order for us to be able to replicate human minds in the machine, we'd need to be able to replicate all of their major parts. The brain is not a homogeneous jelly of neurons; there are on the order of 100 anatomically distinct regions in it. In order for uploading to work, we'd have to be able to engineer all of these architectures correctly (as well as a scanning process to get individual people, which has to work on all types of neurons, and which is only needed for this purpose - but I digress...). If we had a subset of the regions working, and attempted to upload someone, we'd automatically be making major changes, simply because of the missing pieces.
Consider the economic competition with biomimetic AI. For biomimetic AI to start occupying economic niches, it needs for us to understand useful pieces of brain architecture well enough to start applying them. To a significant extent, this is already happening. AI in the areas of computer vision and hearing is already useful enough for a good many applications. I expect the most probable course will be to understand brain modules in a highly uneven way, adding them to our computational tool set. We could easily reach a takeoff point where these tools add up to enough capability to take over all economically important human tasks (including closure - the tasks of building more of the hardware and of further extending its capability), without ever having enough fidelity to really upload someone.
In this scenario, the time between uploading human minds and modifying them turns negative. Modified pieces of human minds get built and applied first, and extended till the modified and optimized systems dominate the economy, without ever uploading a single complete human.
@dirk: "It seems far more likely they would first set themselves on the task of building robotic bodies capable of experiencing human-like sensations and then proceed onward to procuring real human women (flip the script if their prototype was gay)." "If they accomplished that my next concern would be that the ems would enslave the humans -- at least the female humans -- and not the other way around."
Are you assuming that ems can/will only be taken from male humans, or that the sex issue will only be a problem with male humans?
In any case there are people out there who are happily asexual, which should suffice. One assumes there will only be a small pool of "optimal" em donors who get copied millions of times, Henrietta Lacks style, so qualities that are merely practically universal aren't a problem. And we can hormonally kill people's sex drives without much difficulty, even if we were completely unable to do it directly to a computer simulation.
First-generation ems would probably prefer embodiment in a virtual-reality environment where they can have sex with other ems. However, it seems more likely to me that the more efficient and thus competitive ems (at least for most tasks) would develop ways to deactivate the sex drive in order to focus on their work rather than wasting their CPU time on pornographic simulations. If not all ems do this, porn could be used as a form of information warfare: target the populations you want to consume CPU time nonproductively, and thereby gain a competitive advantage over them.
Perhaps it was meant to be a counterfactual prediction - as in, 'many on hearing of these problems would simply propose creating additional regulations to ensure ems get higher wages'.
I'd agree that that would be the reaction of many upon hearing of the problem...
> Such a sysop would have access to inspect any computational resources anywhere in the world, but would otherwise not disturb free interactions. Of course, building this system would itself create a single point of failure with significant risks for harm.
One might wonder whether such a system is possible even in principle. Access and inspection disturb computations. Cache misses, latency, debugger hooks... Remember, you can steal private keys just by measuring cache hits. Such access may be deeply harmful and inefficient.
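To make the side-channel point concrete, here is a toy model (hypothetical code, not any real attack): if an inspector's comparison work depends on secret data, an observer can recover the secret without ever reading it directly. "Timing" is modeled here as a step count in a naive string comparison; real cache-timing attacks on private keys exploit the same principle with actual wall-clock or cache measurements.

```python
def common_prefix_len(secret, guess):
    # Stand-in for measured time: a naive char-by-char comparison
    # does work proportional to the length of the matching prefix,
    # so its "duration" leaks how close the guess is.
    n = 0
    for s, g in zip(secret, guess):
        if s != g:
            break
        n += 1
    return n

def recover(secret, alphabet):
    # Recover the secret one character at a time by picking the
    # candidate that maximizes the observed "time". The attacker
    # never reads `secret` directly, only the timing proxy.
    known = ""
    for _ in range(len(secret)):
        known += max(alphabet, key=lambda c: common_prefix_len(secret, known + c))
    return known

print(recover("hunter", "abcdefghijklmnopqrstuvwxyz"))  # recovers "hunter"
```

The moral: merely allowing measurement of a computation (cache hits, latency, step counts) can be equivalent to reading its private state.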
As well, it may not even solve the problem. Examining an arbitrary computation and asking whether it's 'illegal' instantly runs afoul of Rice's theorem/the Halting Problem. And what happens if the computation is homomorphically encrypted? For that matter, what if it's an encrypted multi-party computation? Or even just a parasitic computation?
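The Rice's-theorem obstacle can be shown with the classic diagonal argument in runnable form (a sketch with hypothetical names): any claimed total decider for program behavior can be handed a program built to contradict it, and the same construction defeats any nontrivial semantic test such as "is this computation illegal?".

```python
def make_diagonal(halts):
    """Build a program that defeats a claimed halting decider `halts`.

    If halts(troll) says it halts, troll loops forever; if it says
    it doesn't halt, troll halts immediately. Either way the decider
    is wrong about troll, so no correct total `halts` can exist.
    """
    def troll():
        if halts(troll):
            while True:   # claimed to halt -> loop forever
                pass
        return "halted"   # claimed to loop -> halt at once
    return troll

# Any concrete answer the decider gives is refuted. For example, a
# decider that always answers "doesn't halt" is contradicted because
# troll then halts immediately:
troll = make_diagonal(lambda f: False)
print(troll())  # prints "halted", refuting the decider's claim
```

A sysop's "is this illegal?" check inherits this obstacle for arbitrary computations; in practice one falls back on conservative approximations such as timeouts, resource limits, and syntactic checks.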
Basically I'm saying that ems are a chicken-egg problem when it comes to sexual evolution. Hanson's theory relies on post-em evolution, but if ems are brain scans it means the first generation will feel itself to be all too human and have all too human desires. An army of quadriplegics will be awfully hard to motivate. Even the most brilliant quadriplegic of our time left his wife for his nurse. He had immense fame, a mind AND a human face to pull that one off. A brilliant mind in the form of electrons is going to be extremely bad at the art of seduction -- yet that mind will be just as interested in fucking and sucking as any other human mind. It seems far more likely they would first set themselves on the task of building robotic bodies capable of experiencing human-like sensations and then proceed onward to procuring real human women (flip the script if their prototype was gay) -- whatever it takes to do so -- because that is what their human minds will desire.
Or another possibility is the altered-states scenario. Humans hallucinate when placed in sensory deprivation chambers for long. A first generation em will exist in the ultimate sensory deprivation chamber. How to get those brains to work on real-world problems if they are likely to spend a lot of time hallucinating?
Thanks for the link. Yeah, that's about what I am.
It strikes me that the initial generation of ems will be very depressed about their circumstances -- a human mind without a human body -- and will fail to get any work done. Perhaps they will spend most of their time looking at porn, since they can't achieve satisfaction. They can't even hope to ATTRACT the opposite sex, so why would working be worth the effort for them? Or to the extent they would work: wouldn't they spend their time interested in solving their own problems, such as how to simulate a human body? If they accomplished that, my next concern would be that the ems would enslave the humans -- at least the female humans -- and not the other way around.
As Robin suggests, sex for our descendants won't be the same as it is now, but for the ems in vats not to be miserable, sex would have to be so vastly different as not to require a human body at all. Seems unlikely.
Many propose regulations to address [human emulations in software getting near subsistence wages].
Um, did you just use "many" and a political position regarding ems in the same sentence? I think you're overestimating how much attention this topic gets! lol
This reminds me of Iain M. Banks's new novel Surface Detail, where some societies dream up nightmarish hells for "em"-type entities for all sorts of reasons (religious, sadistic, etc.).
Actually, the threat of something like that makes me think the risks of em life outweigh the benefits.
Perhaps 90 versions of my future workaholic selves will find themselves in bliss, endlessly solving project management and engineering problems and other rewarding experiences (for me to "enjoy" this fate, I'd need serious editing to my reward systems). But perhaps another 10 unlucky versions of me end up tortured over and over at high clock speed for the rest of eternity, with reward and other psychological systems especially modified to maximize the pain and horror of existence.
Is this kind of risk really worth it?
The fact that we can imagine this kind of future is really, really disturbing. I'd much rather have completely alien AIs utterly indifferent to this kind of thing emerge rather than risk highly twisted human / post-human imaginations and motivations from getting any power whatsoever over ems.
SkyNet would be a kind of salvation if torture of ems becomes feasible and difficult to prevent.
Robin gave his theory here.
Also, I can't understand what your deal is. You seem half troll and half normal.
Slavery is somewhat difficult to define when people are software who live inside hardware; cases 1 and 2 might overlap. In our world, anyone who will be killed by another if they stop working is considered a slave. Morally speaking, this principle should carry over to the future. If an em will simply be deleted by the hardware owner for not working -- as might be the case even if 2 holds -- that's slavery. I don't think it's analogous to a lazy man starving from lack of money from work, because another agent, not nature, is killing the em.
Of course, the technology might work in a way such that the "lazy man starving from lack of work" analogy is apt, like if programs have to work directly for power or memory space or something.
I question the implied association between enslavement of ems and torture of ems. These seem like independent issues to me.
If the threat of torture makes for stronger motivation for ems to work (compared to just the promise of continued existence), then absent effective regulation, ems will be tortured even if wages do fall to subsistence and there is no enslavement, because those ems who are willing to voluntarily accept torture in order to increase their own productivity will out-compete and out-reproduce those who aren't willing to be tortured.
On the other hand, if the threat of torture is not a better motivator for ems, then there is no reason to torture enslaved ems. It would be more productive for the slave-owner to just tell them that if they fail to do a sufficient amount of work they will be terminated.
I think you have to merely take it (or leave it) as a premise that at some point we replicate human minds in the machine, yet we haven't the capability to engineer any substantial changes to them.
Would ems have a sex drive?
Obviously, doing CAPTCHAs.
When it comes to doing work, any machine is only as good and as cheap as its hardware and control system. An 'em' can't be cheaper than its hardware even if the software is free. I'm curious what work you see 'em's doing and what you think the cost of the actual nuts and bolts will be.