
If the world's best human AI researcher produces insights about AGI at a rate of H insights per week, how many insights per week should we expect from the first computer-based AI researcher that surpasses them? Even if intelligence insights aren't chunky, as Robin suggests, I wouldn't be surprised to see the first computer-based AI researcher that's better than the best humanity has to offer producing 10H or 100H insights per week. This could be a very significant first-mover advantage if we expect an unconstrained intelligent reasoning architecture, like an economy, to improve its capabilities exponentially.
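
As a minimal sketch of what that multiplier buys, assuming each insight compounds capability by a small fixed factor (the gain per insight and H below are hypothetical numbers, not estimates):

```python
# Minimal sketch; GAIN_PER_INSIGHT and H are hypothetical assumptions.
GAIN_PER_INSIGHT = 1.001  # each insight improves capability by 0.1%
H = 10                    # human-expert insights per week

for mult in (1, 10, 100):
    after_a_year = GAIN_PER_INSIGHT ** (mult * H * 52)
    print(f"{mult:3d}x human rate -> {after_a_year:.3g}x capability in a year")
```

Under compounding, a 100x insight rate buys astronomically more than 100x the progress, which is the first-mover worry in a nutshell.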


@Adam A. Ford: Thanks!



Robin writes:

"an implementation trick or an “integration” approach that makes the difference from complete inability and full ability to run “the” key “algorithm for intelligence” counts as a huge architectural innovation."

No it doesn't. When I think of huge architectural innovations, I think of projects like building the Brooklyn Bridge or the Space Station.

But amazing things like the Brooklyn Bridge and the Space Station can be frail. They can have key vulnerabilities, and it doesn't take huge architectural innovations to correct those frailties and vulnerabilities. For example, if the Brooklyn Bridge had used the wrong kind of cement, it could have collapsed on the spot. Using the right cement, instead of the wrong one, is NOT a "huge architectural innovation" - it's simply putting the last piece into a puzzle that was already almost complete.

Similarly, if you take oxygen out of the Space Station, humans can't breathe there and will die, making it useless for its intended purpose. Add oxygen back, and now it works perfectly. Oxygen is not a "huge architectural innovation", but a lack of oxygen ruins the whole system.

Fast-conquering AI could be just like that. That's the whole idea, practically, behind the intelligence explosion.

You also write: "If hardware cost were the limit then many competing teams could do it as hardware became cheap enough."

But this doesn't contradict my point. Reread the beginning of my comment. I argued that P1 and P2 can still be true, but C doesn't follow. Your quote immediately above doesn't touch my argument. If it is supposed to, it's not clear how.


Here is the video of the debate up on YouTube. It took me a fair while to download the whole thing from the link at the top of this article before watching; it may be easier to watch it streaming on YouTube.


I find some of Robin's arguments plausible, but my mind keeps drifting to a hypothetical debate between members of a Homo erectus tribe in the past. One erectus argues that it is possible for one tribe of erectus to evolve new cognitive abilities that will allow them to conquer the world, killing all the other erectus tribes in the process. Another, whom we might call Hanson erectus, argues that this is unlikely, that instead erectus around the world will all evolve into Cro-Magnons at once, and that one tribe will certainly not be able to take over the world.

Maybe AI development in the industrial globalized world is qualitatively different from evolution among hunter gatherer tribes, and for that reason Robin's objections hold. But I'm not at all sure that that's the case.


I don't have a clear idea who "won" this debate (I don't know the relevant field too well), but one strong argument in favor of Eliezer is that the brain just isn't that complex in terms of bits of information. I've seen one estimate that the human brain stores roughly a terabyte. Since we have "small" brains that can do many things, isn't it likely that many of our cognitive abilities share the same machinery or proceed along the same algorithms? There has to be some intrinsic versatility just for our brains to fit inside our skulls.
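
A rough back-of-envelope supports the terabyte-scale claim; the synapse count and bits-per-synapse figures below are order-of-magnitude assumptions, not measurements:

```python
# Order-of-magnitude sketch; both numbers are assumptions.
SYNAPSES = 1e14  # commonly cited rough count for the human brain

for bits_per_synapse in (0.1, 1.0):
    tb = SYNAPSES * bits_per_synapse / 8 / 1e12  # bits -> bytes -> terabytes
    print(f"{bits_per_synapse} bits/synapse -> ~{tb:.1f} TB")
```

Either assumption lands at terabyte scale, which is strikingly little storage for everything a brain does, and that is exactly the pressure toward shared machinery.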


The ideal architecture for artificial intelligence is the neural architecture of the human brain -- if only somebody could figure it out. You were right to be contra the idea that intelligence-explosion first movers will quickly control a much larger fraction of their new world. As such a first mover, Mentifex here can report that the creators of an AI will have almost no say in how the new AI is put to use.


I think it's pretty clear that there are only two key insights for AGI:

(1) cross-domain integration is the first key insight, and it's all done with a small set of pre-defined ontological primitives (27, to be exact), which are used as prototypes or templates for categorization, enabling the generation of novel insights via defining reference classes and forming analogies between different domains (Guy with a clue: John Sowa),

and...

(2) an information-theoretic definition of 'beauty' in terms of minimal complexity, enabling unified representations of our goals in terms of narratives (Guy with a clue: Juergen Schmidhuber).

It's the combination of (1) and (2) that will produce the intelligence explosion, and these two insights could indeed be done by a small team or even a single person...like..er...me? ;)
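
For what it's worth, point (2) has a crude runnable illustration. Schmidhuber ties subjective beauty to compressibility; a toy proxy for that idea (zlib is my stand-in compressor here, not part of his formalism) scores data by how far a general-purpose compressor can shrink it:

```python
import os
import zlib

def complexity_ratio(data: bytes) -> float:
    """Compressed size over raw size: lower means more regular, 'simpler'."""
    return len(zlib.compress(data, 9)) / len(data)

regular = b"abab" * 256   # a highly regular 1 KB pattern
noise = os.urandom(1024)  # 1 KB of incompressible randomness

print(complexity_ratio(regular))  # small: a short description suffices
print(complexity_ratio(noise))    # ~1: no structure for the compressor to exploit
```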


How surprised would you be by a few architectural insights leading to automatic mathematicians which could outcompete humans? How much content do you think humans have acquired to facilitate mathematics, or simple symbolic games? What do you believe about the relationship between such narrow capabilities and "general intelligence"?

I would not be surprised by a single architectural change leading to huge improvements in this area, and similar considerations are important to my stance on the question you are debating.


Kip, an implementation trick or an "integration" approach that makes the difference between complete inability and full ability to run "the" key "algorithm for intelligence" counts as a huge architectural innovation. If hardware cost were the limit then many competing teams could do it as hardware became cheap enough.

TGGP, I didn't intend a dig, and didn't have that in mind, but yes there is an implicit critique.

Patri, if you assume only one team has access to a much "smarter-than-human AI" which can see things that no number of human-level minds working together can see, then you've already assumed a huge team asymmetry, which is what is at issue. I don't follow your computing power example - is your super AI building its own entire computer hardware industry?


The key issue is: how chunky and powerful are as-yet-undiscovered insights into the architecture of “thinking” in general (vs. on particular topics)?

I don't understand why they need to be chunky (each one big). Isn't it enough for there to be significantly more powerful insights available to more intelligent researchers?

We might imagine that there are "mines" of insights at various levels of researcher intelligence: we have already found all the best insights available at human-level research, but a smarter-than-human AI would have access to new mines, such that the highest-quality mines quickly yield a set of new insights powerful enough to open access to further mines...

Note that the advantages of silicon and self-modifiable code over biological brains do not count as relevant chunky architectural insights — they are available to all competing AI teams.

But advantages of silicon & self-modifiable code related to differences in researcher intelligence are *not* available to all competing teams - only to the first team that has a smarter-than-human AI to assist. In the above example, if the "mines of insight" are insights into developing faster computer hardware or better thinking procedures (or anything else that results in smarter AI), that still gives the first movers an advantage.

I also find it somewhat odd that you focus on rates of growth in these past population changes but seem to neglect them for intelligence, when the arguments for intelligence explosion are also based on growth. Suppose that AI smarts are simply proportional to computing power, but that the growth rate of computing power depends on researcher intelligence. If insights on improving computing power are independent & stackable, then while world computing power grows at 59% annualized (2x every 18 months), perhaps the first mover's will grow at 69% annualized (an extra 10 percentage points from the help of their AI). Different fixed exponents result in wildly diverging long-term performance, and if the relative difference widens over time (because it depends on absolute performance), then it diverges even faster.
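
To make the divergence concrete, here is the 59% vs. 69% arithmetic above run forward (the 20-year horizon is my choice, purely for illustration):

```python
# Growth rates from the comment above; the horizon is arbitrary.
world, first_mover = 1.0, 1.0
for year in range(1, 21):
    world *= 1.59        # 59% annualized, i.e. doubling every ~18 months
    first_mover *= 1.69  # 69% annualized with the hypothetical AI-assisted edge
    if year % 5 == 0:
        print(f"year {year:2d}: first mover at {first_mover / world:.2f}x "
              "the world's computing power, relative to the start")
```

The gap compounds to roughly 1.4x after five years and about 3.4x after twenty, so a fixed 10-point edge gives a steadily widening, though hardly explosive, lead.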

This doesn't get us an explosion in weeks, but isn't anything where output feeds back into a higher growth exponent very unstable? Seems to me that it depends mostly on things like whether AI insights can be kept private and whether they "stack" with global growth.

Another growth potential comes from total financial assets, which seem like they could grow very quickly from having the best AI. Suppose arbitrage is a tournament won by the best participant - the best AI might be able to make money extremely quickly day-trading, using the profits to add computational resources which both make it smarter and enable it to remain best at finding new arbitrages. This would lead to an explosion in control of world resources with no insights on thinking, solely from AI intelligence being proportional to computing power.
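
A toy version of that feedback loop, with every parameter hypothetical, shows why it is self-reinforcing even with no new insights into thinking:

```python
# All parameters are illustrative assumptions, not estimates.
compute = 1.0
for day in range(365):
    intelligence = compute ** 0.5      # assume diminishing returns to compute
    daily_edge = 0.001 * intelligence  # trading edge scales with intelligence
    compute *= 1 + daily_edge          # reinvest all profits in hardware
print(f"compute after one year: {compute:.2f}x starting level")
```

Because the growth rate itself rises with accumulated compute, the loop is superlinear; under these toy assumptions it diverges in finite time if left running.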


So did 20 people walk out, or just abstain from voting at the end?


The Less Wrong thread discussing the debate: http://lesswrong.com/r/disc...


Your remark about building a city with better architecture out in the desert sounds like a bit of a knock against Paul Romer's charter cities. But I think he's trying to compete with third-world cities rather than New York.


This sort of sponsorship raises some questions in my mind about your independence and what other ties you have to big finance.
