
I disagree.


IMO, it is past time to declare that this post has aged badly.


Horses were real organisms that already existed at the time. Planes and ornithopters are both artificial. If we had built "horse emulations" before cars, you would have a more relevant point.


Did you read the book? The premise is that software that can emulate a human mind will be cheap. If that is the case, there will be enormous pressure to copy people who are broadly capable in the economy. Spurs of generally capable individuals can be trained for specialized tasks. The evidence is that there are no farming savants or driving savants.


"That seems like a challenge to the idea of division of labor itself."

Not at all - just that those supporting tasks really do all need to be done. Until they can be done by machines, they will continue to be done by humans; when machines can take over, they will. If my claim here is correct, I think it suggests that there won't be a big sudden takeover by mindless narrow AI, because having a computer do some small part of a task, even if it looks like the 'core' part, isn't anywhere near enough to automate the whole task. Narrow isn't enough; you need general AI - which may, as you say, be very widely distributed and need not all be contained in one box, but does need to have all the necessary parts, even the boring 'supporting' parts.

Thinking about it, I realize I'm echoing a claim made by Fred Brooks in the software engineering book The Mythical Man-Month. He claims that most of the work involved in writing programs isn't in making them work, but in making them robust - tolerant of a wide range of possible inputs, and usable as part of a larger system. Specifically, his claim is that making a program sufficiently robust in each of these directions takes three times as long as writing the core functionality, so nine times as long overall to turn a brittle 'program' into robust, fault-tolerant, usable, useful 'software'. I think he's right, and that the point applies more generally.
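To make that three-times-three ratio concrete, here is a toy Python sketch (my own illustration, not Brooks's): the 'core' computation is one line, while the input-tolerance that turns it into usable software is almost all of the code.

```python
# Toy illustration of Brooks's point: the 'core' logic is a one-liner,
# while making it robust to real-world input takes far more code.

def core_average(readings):
    """Brittle 'program': assumes a non-empty list of clean floats."""
    return sum(readings) / len(readings)

def robust_average(readings):
    """'Software': tolerates strings, unit suffixes, gaps, and bad values."""
    cleaned = []
    for r in readings:
        if r is None:
            continue                      # missing sample
        if isinstance(r, str):
            r = r.strip().rstrip("Cc")    # strip unit suffixes like '21.5C'
            if not r:
                continue
            try:
                r = float(r)
            except ValueError:
                continue                  # unparseable entry, skip it
        if not (-90.0 <= r <= 60.0):      # physically implausible reading
            continue
        cleaned.append(float(r))
    if not cleaned:
        raise ValueError("no usable readings")
    return sum(cleaned) / len(cleaned)

if __name__ == "__main__":
    raw = [21.5, "22.1C", None, "bad", "  ", 999, "20.9"]
    print(robust_average(raw))   # ~21.5; core_average(raw) would just crash
```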


That seems like a challenge to the idea of division of labor itself. Like Adam Smith's pin factory, most economic tasks seem divisible into a small core real task and all the support stuff and that real task can be done much more efficiently when isolated.

I think you point to a critical difference. Modern technical AI is not like an organism in having to take care of all the details of existence and replication on its own.

I don't see that as competitive with humans, just the opposite. I see it as keeping a barrier between minds and machines that serve them.


I'm not sure it's so obvious that most tasks really are made up of some small core component, the 'real' task, while everything else is just cruft we have to deal with because we're stuck with all our useless human baggage. It seems more like modern technological systems have to solve most or all of the same problems as do biological systems - fuel, waste, maintenance, growth, planning, adaptation, interacting with others, etc. Perhaps the biggest difference is that the systems we build can be much more widely distributed and interdependent.

The idea that AI can suddenly knock us off our feet, not by doing everything we do but by cutting out some large proportion of our functionality which is not 'really useful', seems quite mistaken to me.


" Yes, some people are so impressed by recent UAI demos that they think this time is different, so that we will now see an unprecedented burst of progress all the way to full UAI within a decade or two. But I remember past demos that were similarly impressive relative to then-current abilities."

This is pretty funny. Most of the unprecedented burst of progress you talk about was not an advance in algorithms but instead improved hardware. For AlphaGo they basically used neural network algorithms from the 1990s. It's the hardware that caught up to the software. There was no unprecedented burst of progress... just a sudden realization that the old algorithms actually worked!

So you are arguing that UAI will fail to progress rapidly because that was the case in the past. But UAI failed to progress because the hardware wasn't fast enough, which is exactly the same thing that would constrain ems.


I think the argument of the book is stupid. Given that Robin Hanson is smart, I am not sure why he is so stupid on this.

AI is heading towards greater and greater specialization. Ems are the opposite of that. You want more idiot savants, not human beings. Future AI will be extremely good at doing specific things extremely well. Capitalism doesn't want human-level AI, and therefore it won't get it.

What you will see in the future is AI that is incredibly good at driving cars, another that is good at farming, another that is very good at making brick walls, etc. Your ems and your UAI will get their asses kicked by specialized AI each and every single time.


I don't think the big rival to your idea of ems is what you call UAI as much as it's limited or technical AI. The computational cost of emulating full minds would always be much higher than that of the sort of limited, focused intelligence that isn't intentional or mind-like. Uploading the entire mind of a Go champion would always take orders of magnitude more computational resources than an intelligence that plays Go just as well but has no mind. Beyond the excess cost just to run entire minds, they also have to be catered to and manipulated, and the cost of managing them is much higher.
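A rough back-of-envelope version of that gap, using illustrative order-of-magnitude assumptions rather than measurements: figures around 1e18 FLOPS are commonly cited for whole-brain emulation at spiking-neuron detail, while a strong mindless Go engine runs on roughly one modern GPU, on the order of 1e14 FLOPS.

```python
# Back-of-envelope comparison: illustrative orders of magnitude only.
# Both figures are assumptions for the sake of the argument, not measurements.

WBE_FLOPS = 1e18        # whole-brain emulation at spiking-neuron detail (rough literature figure)
GO_ENGINE_FLOPS = 1e14  # a strong mindless Go engine on roughly one modern GPU

ratio = WBE_FLOPS / GO_ENGINE_FLOPS
print(f"Running an em vs. a narrow Go engine: ~{ratio:.0e}x more compute "
      f"({ratio:,.0f} times), before any cost of managing the mind.")
```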

The only justifications I can see that would place any value on using slave minds rather than mindless intelligent software are values that might be called Veblen value or sadistic value: either conspicuous consumption or dominance for its own sake.

Mindless intelligent software seems like it could be very friendly and person-like without there being any hint of a real mind involved. I'd expect to see assistants like Siri or Alexa as the top layer of a stack of technical AI code that works like an expert system, in turn using other software. The whole stack would do all sorts of superhumanly intelligent, useful things, but completely without aspirations, emotions or intentions.
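As a minimal sketch of what such a mindless stack might look like (this is not how Siri or Alexa are actually built; the module and handler names here are hypothetical): a thin conversational top layer simply routes requests to narrow, specialized modules, with no goals or intentions anywhere in the stack.

```python
# Minimal sketch of a 'mindless' assistant stack: a thin conversational top
# layer dispatches to narrow, specialized modules. No goals, no persistent
# state, no aspirations - just routing. Module names are hypothetical.

def weather_module(city: str) -> str:
    return f"Forecast for {city}: sunny, 22 C."   # stand-in for a real service call

def timer_module(minutes: int) -> str:
    return f"Timer set for {minutes} minutes."    # stand-in for a real scheduler

def assistant(utterance: str) -> str:
    """Top layer: keyword routing to narrow backends (a real system would use
    an intent classifier, itself just another narrow component)."""
    text = utterance.lower()
    if "weather" in text:
        return weather_module(city="Lisbon")      # city extraction elided
    if "timer" in text:
        return timer_module(minutes=10)           # duration extraction elided
    return "Sorry, I can't help with that."

print(assistant("What's the weather like?"))
print(assistant("Set a timer, please."))
```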

A self-driving car might be engineered using ems: upload the minds of chauffeurs and run them in cars with cameras and sensors. But it's just a lot cheaper, computationally and practically, to do it with mindless intelligence.

We've just transitioned out of a world that required minds for every task that needed even minimal intelligence. Slaves, servants or domestic animals served the em role. It seems as though machines with mindless, limited intelligence won out (except for Veblen or sadistic value).


"The forsaken of your humanity.... But how cruel!" Luckily it is only a cartoon and can easily be overcome.


"a simulated life on a silicone chip with free will, will go crazy."Indeed...

https://www.youtube.com/wat...


Yes, we can adapt to lots of things, but not everything. Serious violations (from which there is no escape) activate self-cures, which can be quite bizarre and contagious. This happens at the individual, family, clan, and national levels. The key question is thus: what counts as a violation, and who is the judge? Consensus about that is very "lumpy" and prone to self-deception, but the crazies still go crazy, and I propose that a simulated life on a silicon chip with free will, will go crazy.


Emulation abilities should be more lumpy than UAI. A drugged human is a great emulation of a non-drugged human, but its economic value at work is far less. There isn't much value in exactly emulating worms, so we aren't working much on that.


This is the classic concept of "alienation." Humans have been increasingly alienated from environments that feel natural and comfortable for many thousands of years, and there is probably much more to come.


Em teams would be the size that gives max productivity to their task. If I knew what size that was, I'd say so.
