Setting The Stage

As Eliezer and I begin to explore our differing views on the singularity, perhaps I should summarize my current state of mind.

We seem to agree that:

  1. Machine intelligence would be a development of almost unprecedented impact and risk, well worth considering now.
  2. Feasible approaches include direct hand-coding, based on a few big and lots of little insights, and emulations of real human brains. 
  3. Machine intelligence will more likely than not appear within a century, even if the progress rate to date does not strongly suggest the next few decades. 
  4. Many people say silly things here, and we do better to ignore them than to try to believe the opposite. 
  5. Math and deep insights (especially probability) can be powerful relative to trend-fitting and crude analogies. 
  6. Long term historical trends are suggestive of future events, but not strongly so.
  7. Some should be thinking about how to create "friendly" machine intelligences. 

We seem to disagree modestly about the relative chances of the emulation and direct-coding approaches; I think the first and he thinks the second is more likely to succeed first.  Our largest disagreement seems to be on the chances that a single hand-coded version will suddenly and without warning change from nearly powerless to overwhelmingly powerful; I’d put it as less than 1% and he seems to put it as over 10%. 

At a deeper level, these differences seem to arise from disagreements about what sorts of abstractions we rely on, and on how much we rely on our own personal analysis.  My style is more to apply standard methods and insights to unusual topics.  So I accept at face value the apparent direct-coding progress to date, and the opinions of most old AI researchers, that success there seems many decades off.  Since reasonable trend projections suggest emulation will take about two to six decades, I guess emulation will come first.

Though I have physics and philosophy training, and nine years as a computer researcher, I rely most heavily here on abstractions from folks who study economic growth.  These abstractions help make sense of innovation and progress in biology and economies, and can make sense of historical trends, putting apparently dissimilar events into relevantly-similar categories.  (I’ll post more on this soon.)  These together suggest a single suddenly super-powerful AI is pretty unlikely. 

Eliezer seems to instead rely on abstractions he has worked out for himself, not yet much adopted by a wider community of analysts, nor proven over a history of applications to diverse events.  While he may yet convince me to value them as he does, it seems to me that it is up to him to show us how his analysis, using his abstractions, convinces him that, more likely than it might otherwise seem, hand-coded AI will come soon and in the form of a single suddenly super-powerful AI.

  • Z. M. Davis

    “Our largest disagreement seems to be on the chances that a hand-coded version will suddenly and without warning change from nearly powerless to overwhelmingly powerful; I’d put it as less than 1% and he seems to put it as over 10%.”

    Are these P(hard takeoff | hand-coded AGI) or just P(hard takeoff)?

  • You give me too much credit. I. J. Good was the one who suggested the notion of an “intelligence explosion” due to the positive feedback of a smart mind making itself even smarter. Numerous other AI researchers believe something similar. I might try to describe the “hard takeoff” concept in a bit more detail but I am hardly its inventor!

  • Ian C.

    Is another possibility that AI comes out of biotech? Not brain emulation, but study of our own genome leading to algorithms we hadn’t thought of, which we then optimize and hand code.

  • Tim Tyler

    Nick Bostrom and I have essays about the “will there be one” issue. My synopsis: nobody knows yet.

    If it happened, it also seems not to be known whether the route to the one would consist of an extended period of natural selection between corporate superintelligences, or whether it would happen rapidly, e.g. as one government effectively took over the world. We can see large technological shifts looming, in the form of machine intelligence and nanotechnology. They may well provide enough of a power imbalance to create an opportunity for such a shift.

    These seem like rather separate issues to me. “Will we wind up with one big superintelligence?” is one issue… and if that happens, “will it be a direct descendant of the first superintelligence?” is another one. IMO, it is best to consider these issues quite separately – they represent quite different problems.

    Corporate competition seems relatively benign today – but competition between future superintelligent agents might not be.

    The possibility of superintelligent agents competing with one another on the planet for their respective futures does seem like a rather worrying one to me.

    On the other hand, at least natural selection might help save us from the fate of a sexually-selected species which pursues its own evolution down a blind alley. On this front, it’s hard to even know what we should want – let alone what we will get.

  • Z.M., I had conditionals in mind.

    Tim, yes, an eventual “singleton” mind is a different claim, and in my estimation much more likely, than that it arises suddenly and without warning.

    Eliezer, I didn’t mean to imply you had originated the hard takeoff concept. But previous descriptions have been pretty hand-wavy compared to the detail usually worked out when making an argument in the economic growth literature. I want to know what you think is the best presentation and analysis of it, so that I can critique that.

  • Daniel Burfoot

    The terminology “direct hand-coding” seems odd to me. Does this mean something like the Cyc project? It would be very surprising to me if AGI came about as a result of large scale programming efforts, where one sits down thinks about each module of cognition (stereo vision, planning, object recognition…) and then implements that module in software. Eliezer, is that what you envision?

    My belief is that for AI to come about, there needs to be a fairly major scientific revolution. Once that revolution has come, the implementation of AI will be fairly straightforward (though still require a big effort). I think that revolution is on the horizon, I can see the storm clouds brewing, but it’s going to take 20 years or so. I don’t think that even heroic programming efforts can succeed until the new ideas arrive. People talk about incremental progress building up over many years, but in my view the work being done within current paradigms is achieving diminishing returns.

    I agree with Eliezer that whole brain emulation is unlikely to succeed and with Robin that it is unlikely that a hand-coded version will suddenly awake and take over the world (depending on the time scale of “sudden”).

    My personal fear is that fairly powerful but narrow AI (computer vision, speech recognition, etc) will be achieved in the near term and then used by governments to enslave their populations. The Singularity is a type of risk that has never before been faced but humanity has a chronic problem with totalitarianism. The disasters of the 21st century will probably just be the same as the 20th but with new technologies. I often consider abandoning AI research because of this fear.

  • manwithaplan

    Is the debate really about whether hard takeoff occurs within the next 100 years with probability .1 or .01? Admittedly, I see value in observing two smart people resolve their disagreements. However, a difference of nine percentage points in the probability of a key event occurring sometime in the next century doesn’t seem that stark a difference to me. Unless Eliezer thinks this event is significantly more probable than 10% — say, 70% — this won’t be a compelling case study in truth-tracking and, I fear, runs the risk of bogging down.

    This is not to say it won’t be interesting, but the dramatic lead-up makes this particular disagreement a little anti-climactic. Maybe there’s more to it. We’ll see.

  • Ben Jones

    Our largest disagreement seems to be on the chances that a hand-coded version will suddenly and without warning change from nearly powerless to overwhelmingly powerful; I’d put it as less than 1% and he seems to put it as over 10%.

    Find it hard to believe that that order of magnitude should make any difference. 1% is an unacceptable probability for an event as dangerous as that. So is 0.1%, come to think of it.

    Besides, what margin of error would you apply here? Surely far more helpful (not to mention safer) to say ‘there’s an appreciable risk of X, and so studying X merits funds and man hours’.

  • Next century? Sure, >70%.

  • A >70% chance of a friendliness-needing event by 2108? As I said <1%, it seems our disagreement is indeed substantial.

  • manwithaplan

    A >70% chance of a friendliness-needing event by 2108? As I said <1%, it seems our disagreement is indeed substantial.

    I agree. That’s a stark difference.

  • Cameron Taylor

    Eliezer, when you say >70% for an ‘AI foom’ event, does your figure take into account all significant events which would halt or set back human technological development? That is, does your >70% figure for the AI takeoff imply a >>70% probability that our technological development over the next 100 years will not be crippled by any other existential risk?

  • I don’t think the timeframe is the question. The question is whether it will happen suddenly, or gradually.
