13 Comments

I don't think the timeframe is the question. The question is whether it will happen suddenly, or gradually.


Eliezer, when you say >70% for an 'AI foom' event, does your figure take into account all significant events which would halt or set back human technological development? That is, does your >70% figure for the AI takeoff imply a >>70% probability that our technological development over the next 100 years will not be crippled by any other existential risk?


"A >70% chance of a friendliness-needing event by 2108? As I said <1%, it seems our disagreement is indeed substantial."

I agree. That's a stark difference.


A >70% chance of a friendliness-needing event by 2108? As I said <1%, it seems our disagreement is indeed substantial.


Next century? Sure, >70%.


Our largest disagreement seems to be on the chances that a hand-coded version will suddenly and without warning change from nearly powerless to overwhelmingly powerful; I'd put it as less than 1% and he seems to put it as over 10%.

I find it hard to believe that that order of magnitude should make any difference. 1% is an unacceptable probability for an event as dangerous as that. So is 0.1%, come to think of it.

Besides, what margin of error would you apply here? Surely it is far more helpful (not to mention safer) to say 'there's an appreciable risk of X, and so studying X merits funds and man-hours'.


Is the debate really about whether hard takeoff occurs within the next 100 years with probability .1 or .01? Admittedly, I see value in observing two smart people resolve their disagreements. However, a difference of nine percentage points in the probability of a key event occurring sometime in the next century doesn't seem that stark to me. Unless Eliezer thinks this event is significantly more probable than 10% -- say, 70% -- this won't be a compelling case study in truth-tracking and, I fear, runs the risk of bogging down.

This is not to say it won't be interesting, but the dramatic lead-up makes this particular disagreement a little anti-climactic. Maybe there's more to it. We'll see.


The terminology "direct hand-coding" seems odd to me. Does this mean something like the Cyc project? It would be very surprising to me if AGI came about as a result of large-scale programming efforts, where one sits down, thinks about each module of cognition (stereo vision, planning, object recognition...), and then implements that module in software. Eliezer, is that what you envision?

My belief is that for AI to come about, there needs to be a fairly major scientific revolution. Once that revolution has come, the implementation of AI will be fairly straightforward (though it will still require a big effort). I think that revolution is on the horizon; I can see the storm clouds brewing, but it's going to take 20 years or so. I don't think that even heroic programming efforts can succeed until the new ideas arrive. People talk about incremental progress building up over many years, but in my view the work being done within current paradigms is achieving diminishing returns.

I agree with Eliezer that whole brain emulation is unlikely to succeed and with Robin that it is unlikely that a hand-coded version will suddenly awake and take over the world (depending on the time scale of "sudden").

My personal fear is that fairly powerful but narrow AI (computer vision, speech recognition, etc.) will be achieved in the near term and then used by governments to enslave their populations. The Singularity is a type of risk that has never before been faced, but humanity has a chronic problem with totalitarianism. The disasters of the 21st century will probably just be the same as those of the 20th, but with new technologies. I often consider abandoning AI research because of this fear.


Z.M., I had conditionals in mind.

Tim, yes, an eventual "singleton" mind is a different claim, and in my estimation much more likely, than that it arises suddenly and without warning.

Eliezer, I didn't mean to imply you had originated the hard takeoff concept. But previous descriptions have been pretty hand-wavy compared to the detail usually worked out when making an argument in the economic growth literature. I want to know what you think is the best presentation and analysis of it, so that I can critique that.


Nick Bostrom and I have essays about the "will there be one" issue. My synopsis: nobody knows yet.

If it does happen, whether the route to the one would consist of an extended period of natural selection between corporate superintelligences, or whether it would happen rapidly (e.g., as one government effectively takes over the world), also seems not to be known. We can see large technological shifts looming, in the form of machine intelligence and nanotechnology. They may well provide enough of a power imbalance to result in an opportunity for such a shift.

These seem like rather separate issues to me. "Will we wind up with one big superintelligence?" is one issue; "if that happens, will it be a direct descendant of the first superintelligence?" is another. IMO, it is best to consider these issues quite separately; they represent quite different problems.

Corporate competition today seems relatively benign, but competition between future superintelligent agents might not be.

The possibility of superintelligent agents competing with one another on the planet for their respective futures does seem like a rather worrying one to me.

On the other hand, at least natural selection might help save us from the fate of a sexually-selected species which pursues its own evolution down a blind alley. On this front, it's hard to even know what we should want - let alone what we will get.


Is another possibility that AI comes out of biotech? Not brain emulation, but study of our own genome leading to algorithms we hadn't thought of, which we then optimize and hand code.


You give me too much credit. I. J. Good was the one who suggested the notion of an "intelligence explosion" due to the positive feedback of a smart mind making itself even smarter. Numerous other AI researchers believe something similar. I might try to describe the "hard takeoff" concept in a bit more detail but I am hardly its inventor!


"Our largest disagreement seems to be on the chances that a hand-coded version will suddenly and without warning change from nearly powerless to overwhelmingly powerful; I'd put it as less than 1% and he seems to put it as over 10%."

Are these P(hard takeoff | hand-coded AGI) or just P(hard takeoff)?
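To spell out the distinction being asked about (a sketch only; splitting the cases into "hand-coded AGI" versus "other route" is my framing, not necessarily Robin's or Eliezer's):

P(hard takeoff) = P(hard takeoff | hand-coded AGI) * P(hand-coded AGI) + P(hard takeoff | other route) * P(other route)

The <1% and >10% figures mean rather different things depending on which of these quantities they refer to.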
