Follow-up to: Brain Emulation and Hard Takeoff. Suppose that Robin's Crack of a Future Dawn scenario occurs: whole brain emulations ('ems') are developed, and diverse producers create ems of many different human brains, which are reproduced extensively until the marginal productivity of em labor approaches its marginal cost, i.e. Malthusian near-subsistence wages. Ems that hold capital could use it to increase their wealth by investing, e.g. by creating improved ems and collecting the fruits of their increased productivity, by investing in hardware to rent to ems, or otherwise. However, an em would not be able to earn higher returns on its capital than any other investor, and ems with no capital would not be able to earn more than subsistence (including rental or licensing payments). In Robin's
It's occurring to me that this whole bizarre scenario, which implies that capitalism will last long enough for us to develop "ems", could be read as a Swiftian allegory for how absurd the exploitation of a much more numerous and (in most practical respects) "skilled" working class by a ruling class of capitalists actually is. If this isn't satire, I do wonder where Eliezer's getting his criteria for "historically realistic" :^)
The framework of 'niches' encompasses variation in idiosyncratic specialization and the pure benefits of cognitive diversity. The benefits of specialization and diversity mean that there will be many niches, but they will not protect against turnover in those niches.
Yes, em populations might be located on physically isolated hardware as a means of social control, although boxing without monitoring might lead to regrets.
I know how I would keep a large number of computer-managed brains pliable and cooperative: I would keep them disconnected from the rest of the world, and I would establish a virtual reality for them to live and compete in. They might never realize that they were just EMs - they might think they're real human beings, and live perfectly normal lives with plenty of innovation just because of the drive to create.
And then I would unleash a zombie virus upon them and cackle maniacally from my dark throne!
Does this discussion assume that the best team (most productive team) consists of X copies of the best individual (most productive individual)?
This is certainly arguable.
Experts who make predictions, for example - two copies of the best weather predictor in the world will make the same predictions, and be no better than one. A second expert who doesn't predict correctly as often as the best predictor, but who makes INDEPENDENT errors, is the one to add to your team.
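The value of independent errors can be checked with a quick majority-vote calculation. This is a toy sketch: the accuracies 0.8 and 0.7 are made-up numbers for illustration, not anything from the thread.

```python
from itertools import product

def majority_accuracy(accuracies):
    """Probability that a majority vote is correct, assuming each
    predictor errs independently with rate 1 - accuracy."""
    total = 0.0
    for outcome in product([True, False], repeat=len(accuracies)):
        p = 1.0
        for correct, acc in zip(outcome, accuracies):
            p *= acc if correct else (1 - acc)
        if sum(outcome) > len(outcome) / 2:
            total += p
    return total

best = 0.8  # hypothetical accuracy of the best predictor

# Copies of the best predictor make identical errors, so a team of
# three copies is exactly as accurate as one: 0.8.
print(majority_accuracy([best]))

# Swap two copies for independent 0.7 predictors: the majority vote
# now beats the best individual (~0.826), because the weaker experts'
# uncorrelated errors cancel.
print(majority_accuracy([best, 0.7, 0.7]))
```

The point is that team quality depends on error correlation, not just individual skill - which is exactly why X copies of the best individual need not be the best team.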
The majority of the German population, critically including the German military and SS, did not expect to be targeted in the Holocaust. With their acceptance of Nazi authority, Jewish/Roma/gay insurrection was visibly very unlikely to succeed.
"Apologies for misinterpreting this. I would assume that any EM smart enough to recognise this problem will not seek to rent its hardware or energy supply, but rather, to buy it."
If this is done at the individual level, then its labor will be much more expensive than cheap renter ems, which will outcompete it. As Robin notes in his uploads paper, there will be evolutionary pressures for willingness to create copies who will face poor and risky prospects as renters. However, there will be much weaker pressures to make those ems willing to submit to mass eviction.
"What can't be allowed won't be allowed. If EMs are as smart as the people on this thread then presumably they won't allow Malthusian growth."

'Draconian measures,' i.e. an em singleton.
Merging ems would require a deep understanding of the brain, whereas the emulation scenario assumes the lack of such. If you have that level of understanding you also get customized AIs.
If EMs can be forked, could they conceivably be merged? And intuitively, would this be equivalent to terminating one, both, or neither?
>"I don't think your idea of mass slaughter of the inefficient is realistic for the simple reason that we don't do it now."
>It's not my idea: Robin explicitly says that in his preferred scenario cheap ems will rent their hardware and be subject to lethal eviction. I am critiquing his scenario, arguing that it would be more unstable than Robin seems to believe.
Apologies for misinterpreting this. I would assume that any EM smart enough to recognise this problem will not seek to rent its hardware or energy supply, but rather, to buy it. It will also co-operate with other EMs to pay a large number of Lobbyist programs to lobby government against allowing unrestricted proliferation of newer better AIs.
In other words I'm saying EMs will become GMs. :-)
>"Besides, with an army of AIs, we can expect conquering the universe, almost free energy etc. to come along too."
>Malthusian growth with replication times measured in hours or days will exhaust available resources very quickly, and lightspeed limits restrict acquisition of resources via space colonization to a geometric (cubic) pattern.
What can't be allowed won't be allowed. If EMs are as smart as the people on this thread then presumably they won't allow Malthusian growth.
Lightware:
> Wouldn't people start feeling useless and worthless if ems are better at everything?
As an above-average AI researcher who reads Eliezer's posts, I can say with some certainty, yes, they probably will :-)
Wouldn't people start feeling useless and worthless if ems are better at everything?
If ems are based on humans, there's no need to worry. All of human history since the Neolithic has consisted of most humans being abused by the few rich and powerful and taking it. Most slaves, serfs, and otherwise poor people don't even think about overthrowing the system; they just accept their situation and try to get as much for themselves as possible.
There was even an estimate somewhere on Wikipedia that the whole Holocaust cost the Nazis only a few hundred of their own dead - millions simply complied with being genocided. In less extreme situations there would be even less resistance.
We see less of such abuse right now, but mostly because there's such an abundance of almost everything, not because human nature somehow changed. If one day scarcity returns people won't even remember all the equality stuff.
Carl: Ok, but your post was just the spur to my commenting. Robin and Eliezer, consider the question directed to you also. Perhaps this is intended as a public exercise in rationally addressing disagreement, and the subject matter is not the point?
Don: I've been reading O.B. for most of its existence, and have read everything from the very start (and I might as well say here, it's the most intellectually satisfying blog I have ever read: even the comments are mostly worth reading). Yes, I remember Eliezer's digression into QM, but he did eventually tie it back to o.b.
"I don't think your idea of mass slaughter of the inefficient is realistic for the simple reason that we don't do it now."

It's not my idea: Robin explicitly says that in his preferred scenario cheap ems will rent their hardware and be subject to lethal eviction. I am critiquing his scenario, arguing that it would be more unstable than Robin seems to believe.
"Besides, with an army of AIs, we can expect conquering the universe, almost free energy etc. to come along too."

Malthusian growth with replication times measured in hours or days will exhaust available resources very quickly, and lightspeed limits restrict acquisition of resources via space colonization to a geometric (cubic) pattern.
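The mismatch between exponential replication and cubic resource acquisition can be made vivid with a toy calculation. All the numbers below (initial population, doubling time, resource density) are hypothetical assumptions chosen purely for illustration:

```python
# Toy model: an em population doubling daily vs. the resources
# reachable by lightspeed expansion, which grow only as the cube of
# elapsed time. All constants are illustrative assumptions.

def em_population(days, doubling_days=1.0, initial=1e9):
    """Population after `days` days of Malthusian doubling."""
    return initial * 2 ** (days / doubling_days)

def reachable_resources(days, units_per_cubic_lightday=1e20):
    """Resources inside a sphere expanding at lightspeed: the radius
    in light-days equals elapsed days, so volume grows as days**3."""
    radius = days
    return units_per_cubic_lightday * (4 / 3) * 3.14159 * radius ** 3

for d in (10, 50, 100):
    print(d, em_population(d), reachable_resources(d))
```

Early on the cubic frontier dwarfs the population, but within a few dozen doubling times the exponential overtakes it and stays ahead forever - which is the sense in which space colonization cannot relieve Malthusian pressure.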
"If my Alice can't use language 2, then by god, I'll give it the chance to train up."

You can't afford to spend as much on training many different ems as you can on one (you can spend more by using controlled experiments on copies, etc.), and even if you boost an Alice's productivity above the Bobs', that would simply turn the tables (exactly equal productivity is unlikely).
AC, why do you think most people will empathize with sentient AI? We don't kill old people, but so many cultures have killed useless slaves, entire conquered populations, etc. I strongly suspect most people wouldn't consider uploads to be moral objects on even the most abstract non-action-motivating level for a long time - they're "just software", after all.
General point: Please, please, keep talking about this stuff. I often get the feeling that this site is the only place where it is being talked about, and it's very important for humanity. It's not 'overcoming bias' perhaps, but it definitely needs to be talked about somewhere. On the other hand, 'overcoming bias' is also very interesting. Can you display tags on stories more clearly, and perhaps provide separate RSS streams per tag?
Carl: I don't think your idea of mass slaughter of the inefficient is realistic for the simple reason that we don't do it now. We don't kill off old people, or even people who refuse to 'skill up' in the workplace, despite the high economic expense. Many of us do what we can to support people in third world countries despite the fact they provide us with essentially zero economic utility. Therefore, we won't kill sentient AI either, for the same reason: we're not total b*****ds.
If my Alice can't use language 2, then by god, I'll give it the chance to train up.
If you wouldn't kill an old cat as it enjoys its retirement after a lifetime of mousing, how can you suggest killing Alice or HAL?
Besides, with an army of AIs, we can expect conquering the universe, almost free energy etc. to come along too. This may seem like a rather flippant way to introduce 'conquering the universe', but after the first million Einstein2s are created, I suspect the big problem will be inventing sufficiently challenging games to amuse the population while the ships reach the stars...
One possible strategy of subjugation, if it comes to that, might be the use of mythologies such as "the american dream" -- the incredibly unlikely chance that a currently destitute individual (or em) would be able to contribute something uniquely meaningful after a significant environmental change.
Richard & Filpe: re: "stick to the topic". Have you guys been reading this blog for long? You realize, I hope, that Eliezer had a multi-month "distraction" within the last year, on quantum physics.
The blog is not just about human cognitive biases. At the very least, it covers AI as well. And the current multi-post topic of discussion is the Singularity, a possible consequence of AI.
Carl's post is as on-topic as half the posts in the last year have been.