What Core Argument?

People keep asking me to return to the core of the argument, but, well, there's just not much there. Let's review, again. Eliezer suggests someone may soon come up with a seed AI architecture that would let a single AI grow, within roughly a week, from unimportant to strong enough to take over the world. I'd guess we are talking over 20 orders of magnitude growth in its capability, or 60 doublings.
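To make that rate concrete, here is a minimal back-of-the-envelope sketch (the week and the 20 orders of magnitude are the figures above; the rest is just arithmetic):

```python
import math

# Rough arithmetic for the claimed growth: 20 orders of magnitude in one week.
orders_of_magnitude = 20
hours = 7 * 24

doublings = orders_of_magnitude / math.log10(2)  # ~66 doublings for exactly 20 orders
hours_per_doubling = hours / doublings           # ~2.5 hours per doubling

print(f"{doublings:.0f} doublings, one every {hours_per_doubling:.1f} hours")
```

On that reading, capability would have to double every couple of hours, nonstop, for the entire week.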

This amazing growth rate sustained over such a large magnitude range is far beyond what the vast majority of AI researchers, growth economists, or most any other specialists would estimate.  It is also far beyond estimates suggested by the usual choices of historical analogs or trends.  Eliezer says the right reference set has two other elements, the origin of life and the origin of human minds, but why should we accept this reference?  He also has a math story to suggest this high average growth, but I've said:

I also find Eliezer's growth math unpersuasive. Usually dozens of relevant factors are co-evolving, with several loops of all else equal X growth speeds Y growth speeds etc. Yet usually it all adds up to exponential growth, with rare jumps to faster growth rates. Sure if you pick two things that plausibly speed each other and leave everything else out including diminishing returns your math can suggest accelerating growth to infinity, but for a real foom that loop needs to be real strong, much stronger than contrary muting effects.
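To illustrate the point in that quote, here is a toy numerical sketch (not anyone's actual model; the functional form, coefficients, and damping exponent are all invented): two factors that speed each other's growth explode super-exponentially when nothing mutes the loop, while the same loop settles into ordinary exponential growth once a strong diminishing-returns term is added.

```python
# Toy model of two co-evolving factors x and y that each speed the other's growth.
# `damping` stands in for diminishing returns / low-hanging-fruit-first effects.
# All numbers are made up for illustration.

def simulate(steps=20, dt=0.1, damping=0.0):
    x = y = 1.0
    for _ in range(steps):
        dx = y * x ** (1.0 - damping) * dt  # x's growth is boosted by y's level
        dy = x * y ** (1.0 - damping) * dt  # and vice versa
        x, y = x + dx, y + dy
    return x

print(simulate(damping=0.0))  # no muting: super-exponential blow-up, roughly 1e104
print(simulate(damping=1.0))  # strong diminishing returns: plain exponential, about 6.7
```

Whether the loop "fooms" or merely grows exponentially turns entirely on how strong the muting effects are, which is exactly the disputed empirical question.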

Eliezer has some story about how chimp vs. human brain sizes shows that mind design doesn't suffer diminishing returns or low-hanging-fruit-first slowdowns, but I have yet to comprehend this argument.  Eliezer says it is a myth that chip developers need the latest chips to improve chips as fast as they do, so there aren't really diminishing returns there, but chip expert Jed Harris seems to disagree.

Monday Eliezer said:

Yesterday I exhausted myself … asking … "What do you think you know, and why do you think you know it?" with respect to "How much of the AI problem compresses to large insights, and how much of it is unavoidable nitty-gritty?"

His answer:

The human brain is a haphazard thing, thrown together by idiot evolution … if there were any smaller modification of a chimpanzee that spontaneously gave rise to a technological civilization, we would be having this conversation at that lower level of intelligence instead. … Human neurons run at less than a millionth the speed of transistors … There's no reason to think that the brain's software is any closer to the limits of the possible than its hardware. … [Consider] the manifold known ways in which our high-level thought processes fumble even the simplest problems.  Most of these are not deep, inherent flaws of intelligence. …

We haven't yet begun to see the shape of the era of intelligence.  Most of the universe is far more extreme than this gentle place, Earth's cradle. … Most possible intelligences are not balanced, like these first humans, in that strange small region of temperate weather between an amoeba and a Jupiter Brain. … I suppose that to a human a "week" sounds like a temporal constant describing a "short period of time", but it's actually 10^49 Planck intervals.

I feel like the woman in Monty Python's "Can we have your liver?" sketch, cowed into giving her liver after hearing how vast is the universe. Sure evolution being stupid suggests there are substantial architectural improvements to be found. But that says nothing about the relative contribution of architecture and content in minds, nor does it say anything about how easy it will be to quickly find a larger number of powerful architectural improvements!

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    The question “How compressible is it?” is not related to the paragraph you quote. It is simply what I actually happened to be doing that day.

    20 orders of magnitude in a week doesn’t sound right, unless you’re talking about the tail end after the AI gets nanotechnology. Figure more like some number of years to push the AI up to a critical point, 2-6 orders of magnitude improvement from there to nanotech, then some more orders of magnitude after that.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Also, the notion is not that mind design never runs into diminishing returns. Just that you don’t hit that point up to human intelligence. The main easily accessible arguments for why you don’t hit diminishing returns for some time after human intelligence have to do with the idea that there’s (a) nothing privileged about human intelligence and (b) lots of visible flaws in it.

  • Nick Tarleton

    I’d guess we are talking over 20 orders of magnitude growth in its capability, or 60 doublings. This amazing growth rate sustained over such a large magnitude range….

    How meaningful is this? Capability isn’t anything like linear in intelligence, or any other fundamental property whose rapid growth would be really surprising. (Simple example: a boxed AI crossing the threshold of being able to persuade someone to connect it to the Internet. Or, like Eliezer mentioned, the threshold to design MNT.)

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    I don’t understand why visible flaws implies a lack of diminishing returns near the human level.

  • Jeff Beck

    Robin,

    Why do you have Eliezer on this website? You don’t seem to be very impressed about his view of AI. The rest of his postings are on philosophy, and he’s really terrible at that, though in Ayn Rand-like fashion he thinks he’s quite good at it. This website would be much better if you got rid of him.

  • http://www.virgilanti.com/journal/ Virge

    Robin, you keep quoting the “low-hanging-fruit-first slowdowns” but you don’t acknowledge that with directly recursive improvement, eating the low hanging fruit makes you tall enough to reach the higher fruit. That seems to be something that you’re either missing or have unstated reasons for rejecting.

    I suggested a possible reason for your disagreement with Eliezer in:
    http://www.overcomingbias.com/2008/12/two-visions-of.html#comment-142180830
    but for whatever reason, you chose to respond only to the question of religious language.

    Do you think Eliezer is wrong in his estimates of how much a rational agent could derive from extremely limited experimentation?

    Here’s what Eliezer says in http://www.overcomingbias.com/2008/05/faster-than-ein.html

    That even Einstein did not come within a million light-years of making efficient use of sensory data.

    Riemann invented his geometries before Einstein had a use for them; the physics of our universe is not that complicated in an absolute sense. A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis – perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration – by the time it had seen the third frame of a falling apple. It might guess it from the first frame, if it saw the statics of a bent blade of grass.

    Robin, do you believe that?

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    @Robin: It means you can go on past human just by correcting the flaws. If you look at the actual amount of cognitive work that we devote to the key insights in science, as opposed to chasing red herrings, clinging to silly ideas, or going to the bathroom; then there’s at least 3 orders of magnitude speedup right there, I’d say, on the cognitive part of the process.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    I’m talking orders of magnitude in total capacity to do things, something like economic product, because that seems the simplest overall metric. If the world has ten orders of magnitude of humans, then something that can take over the world is roughly that much bigger than a human. And presumably this AI starts as far less capable than a human. If this scenario happens in an em world, there’d be lots more, and stronger, creatures to beat.

    Eliezer, I don’t see how that follows at all. Just because I can tell that a car’s bumper is too heavy doesn’t mean I have any idea how to make a car. You need to make a direct and clear argument.

    Virge, you are talking about how far we can go, I’m talking about how fast.

    Jeff, in case it is not obvious, I respect and like Eliezer, and am honored to share a blog with him.

  • Cameron Taylor

    “This amazing growth rate sustained over such a large magnitude range is far beyond what the vast majority of AI researchers, growth economists, or most any other specialists would estimate. It is also far beyond estimates suggested by the usual choices of historical analogs or trends. Eliezer says the right reference set has two other elements, the origin of life and the origin of human minds, but why should we accept this reference?”

    I accept it because it appears to me that being able to re-engineer one’s own brain and relatively trivially grant oneself hardware improvements that double each year and then clone oneself one hundred thousand times is quite a significant change. I consider it to be more like the change from chimp to human than the change from learning a new fact that I have to teach my children as they grow up (over a period of 20 years).

  • PK

    May I suggest that Robin and Eliezer each write out lines of retreat.

    Robin, what would you do if Eliezer was right?

    Eliezer, what would you do if Robin was right?

  • michael vassar

    “I’d guess we are talking over 20 orders of magnitude growth in its capability,”

    Surely we can take over the world with 20 doublings from initially a few tens of millions in effective assets. Does it really matter if you only get world conquest via 20 doublings over 4.5 months, as you have discussed before? I don’t see much difference between scenarios with a 1 hr take-off and scenarios with a 1 year take-off (the latter of which should be plenty of time for infinite doublings (or out to physical limits), following your growth modes model for both the speed of the next growth phase and the time till the following growth phase, but is less than fairly normal first-mover to second-mover gaps in today’s tech & business environment).

    “Sure evolution being stupid suggests there are substantial architectural improvements to be found. But that says nothing about the relative contribution of architecture and content in minds, nor does it say anything about how easy it will be to quickly find a larger number of powerful architectural improvements!”

    If a stupid process found large and powerful architectural improvements in getting to human intelligence (ask a chimp if it did, or a worm), then that DOES say that it will probably be easy to find others that it missed with a smarter process.

    “Eliezer has some story about how chimp vs. human brain sizes shows that mind design doesn’t suffer diminishing returns or low-hanging-fruit-first slowdowns, but I have yet to comprehend this argument.”

    If you think that he has understood your argument and you have not understood his, this should make you very unconfident in your conclusions, especially if other people who you believe to be epistemic peers of yourself and Eliezer think that the point you haven’t understood is cogent, and if you don’t think that you have points that he isn’t understanding.

  • http://www.virgilanti.com/journal/ Virge

    Robin: “Virge, you are talking about how far we can go, I’m talking about how fast.”

    No. Emphatically, no. I am talking about both speed and localization. I have no idea what the upper limits to knowledge are, nor are they relevant unless you think humans are anywhere near hitting them.

    Science and engineering, as we know them, are inefficient and resource-intensive. They require cooperation from a broad base of specialists. All your historic examples are plagued by humanity’s poorly designed experiments, lock-ins to erroneous models, misinterpretation of results, miscommunication of concepts…

    Eliezer imagines the possible rate of research if these inefficiencies could be eliminated, and claims that “A Bayesian superintelligence…would invent General Relativity…by the time it had seen the third frame of a falling apple” (see previous comment).

    If he’s right, then knowledge can increase vastly beyond the current human level just by reviewing the current state of human knowledge and designing the minimal set of additional experimentation to select between the best hypotheses. (The question of what performance one could expect from a sub-optimal quasi-Bayesian intelligence is at the moment unanswerable.)

    If he’s right, then the reliance on lots of incremental improvements from a broad base of cooperating/competing intelligences is no longer a limiting factor.

    Robin, do you think Eliezer’s claims about the capabilities of a Bayesian superintelligence are at all reasonable? If not, what would limit its progress?

    Eliezer, have Robin’s arguments made you change your expectations of the capabilities of a Bayesian superintelligence or the likelihood of approximating one?

  • Cameron Taylor

    “I don’t understand why visible flaws implies a lack of diminishing returns near the human level.”

    Imagine, in principle, how Eliezer’s productivity would change if he didn’t have to spend years Overcoming Bias before he got down to real FAI work!

  • Grant

    Why does it matter if the AI in question is “one” mind or many specialized minds? We are talking about rapid growth that will quickly outstrip that of humanity in any case. I don’t see that it matters if an AI could take over the world in two weeks, or a group of AIs could do so in 20 years. An outsider might see the actions of the AIs as being done by one mind, many minds, or a mindless machine. Does it matter?

    Robin says many small AIs would have to share norms to work together, and so would have to respect property rights in order to preserve the institutions that make their society possible. This is exactly what I’d expect too. However, the transaction costs of networked AIs are likely much lower than those of non-networked human minds, so coordination problems are lessened. Though very low transaction costs are a great thing for an AI society in general (as they might nearly eliminate externalities), I’m not sure they are such a good thing for humans depending on a coordination problem for survival. Eventually wiping out humanity might become the economical choice to procure the resources used by humans.

    Though I’m much more afraid of an AGI created to be a friendly god than I am of one created for almost any other purpose.

  • Emile

    Robin: I don’t see how that follows at all. Just because I can tell that a car’s bumper is too heavy doesn’t mean I have any idea how to make a car. You need to make a direct and clear argument.

    I’ll leave the direct and clear argument to Eliezer, but if you see a (functional) car built by blind people, and it’s obvious to you that its bumper is too heavy (even if you have never seen another car before), then it’s a good bet that once sighted people can make a car as good as that, they’ll also be pretty close to making better ones too.

    Seeing flaws in mind design shows that we humans aren’t stuck in a local maximum in mind-space (even though we might be in a local maximum in evolutionary fitness-space). So there’s no reason to expect something searching mind-space to slow down around the “human mind” point.

  • Emile

    Grant: Though I’m much more afraid of an AGI created to be a friendly god than I am of one created for almost any other purpose.

    Really? Would you care to give your reasoning?

  • billswift

    I don’t know about Grant’s reasoning, but I value freedom **much** more than safety.

    “You can have peace or you can have freedom, don’t ever count on having both at once.”

    We have grown up in a world where both were the "natural" condition, but this has been a historical exception. The past few decades have seen the increasing growth of suppression. "Friendly" AI will only increase that tendency, possibly too strongly to effectively resist. At least this blog has convinced me of the need to strongly work for IA.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    PK, if Eliezer was right about very rapid local AI growth, then we’d need to move on to the issue of why developers would so severely misjudge their control of it. If he was right about that, I’d want to tell potential developers their error as clearly as possible.

    Michael, it being easier does not say it is easy.

    Virge, the question is not what a super-intelligence can do, but how easy it is to create one.

    Cameron, you can’t assume the AI has no biases.

    Emile, “pretty close” doesn’t say much about rates.

  • Tiiba

    “””Robin,

    Why do you have Eliezer on this website? You don’t seem to be very impressed about his view of AI. The rest of his postings are on philosophy, and he’s really terrible at that, though in Ayn Rand-like fashion he thinks he’s quite good at it. This website would be much better if you got rid of him.”””

    I’m gonna have to go ahead and disagree with you there. I only come here for Eliezer’s posts. Everybody else, even Robin, is a toolbag compared to him. It’s almost annoying how consistently he hits the nail on the head.

  • http://yudkowsky.net/ Eliezer Yudkowsky

    *I* read Robin’s posts…

  • Grant

    Emile,

    The creation of a “friendly god” already seems to presuppose that it knows what’s best for us. Historically, the most violence has come from leaders operating under the pretense of doing some sort of good (either for certain groups or everyone).

    I know Eliezer and others are thinking very hard about how not to create an uber-tyrant, but even if they succeed I have my doubts that the financiers of such a massive AGI project (and it would have to be massive, if they hope to beat any would-be competitors to the punch) would make the best decisions. I suppose the same could be true of AGIs made for other purposes, but it’s hard for me to imagine them wielding the same sort of political power.

    Of course, if Eliezer is right about the potential of AGI, we’ll likely have an arms-race on our hands anyways.

  • James Andrix

    It’s easy for people to decide that they have a good model of intelligence, and AI is right around the corner, as soon as they get their code working, or as soon as computers get a bit faster.

    Could Eliezer’s view be a generalization of this? If we just had a self-modifying AI, then superhuman AI is right around the corner, and if we had a superhuman AI then SUPERsuperhuman AI is right around the corner, and so on.

    Eliezer says (IIRC) people expect AI from this or that because they don’t understand how hard intelligence is. Eliezer knows that he doesn’t know just how hard Super-Superhuman AI is, but he thinks Superhuman AI is enough to get there. This seems inconsistent to me.

    That we don’t have AI already seems to be evidence that intelligence might not have that large an advantage over evolution in mind design (whereas it has a bigger advantage in designing pumps, flying things, cameras, and projectile launchers).

  • http://zbooks.blogspot.com Zubon

    That we don’t have AI already seems to be evidence that intelligence might not have that large an advantage over evolution in mind design.

    Define “that large”? Intelligence has been on the project for something approaching a century. Evolution has had multicellular life for about a billion years on this planet. Perhaps that is what you mean: intelligence may not be 10,000,000 times as quick. Many of us will be disappointed if intelligence turns out to be 1,000,000 times as quick, leaving us to wait most of a millennium.

  • http://profile.typekey.com/EWBrownV/ Billy Brown

    Interesting that this discussion bogged down with so little progress – this doesn’t exactly inspire confidence in the ability of would-be rationalists to resolve complex issues through discussion.

    For what it’s worth, I think the key questions of fact here mostly revolve around the issue of what a mind with human-level intelligence would look like. Eliezer is apparently of the opinion that an AGI is a complex system of specialized modules, where some modules (like vision) do complex but well-defined processing with O(N) to O(log N) performance, while others are best viewed as polynomial approximations of various NP-complete search problems. In this view it’s obvious that once the AGI becomes competent to write AGI code it can rapidly scale up to use any available hardware, and a lot of his other claims about the capabilities of such an AGI rest on fairly short chains of inference.

    Robin obviously doesn’t hold this view. From his writings to date I can’t tell if he has a different model in mind, or if he just considers the whole question unanswerable at this point. But this seems to be the most fundamental technical issue in play.

  • http://causalityrelay.wordpress.com/ Vladimir Nesov

    More on architecture vs. content, and hard takeoff as a specific technical problem. It might be wrong to think about a seed AI as a content producer that makes putative stuff the way economies do, where you can see improvements and technologies as goods. People project abstractions onto the world around them: if they do something, they usually optimize for an abstraction that is statically attached to that thing. You make a processor, a thing that satisfies an abstraction of a processor. You perform an operation described by an abstraction on other abstractions. This style of development is itself a specific algorithm of rationality; this is what works for us, this is what we are capable of doing. Economic analysis of this process is static analysis of this algorithm, a specific system operating by more or less simple rules.

    If an AI starts to invent algorithms for its own cognition, and it scales not by copying little black boxes and integrating them into the old algorithm of the economy, but by expanding its mind, then you are in trouble. The activity of the AI consists not of putative actions that produce stuff; it consists of following whatever cognitive algorithm previous incarnations of that AI came up with. The external activity of the AI is as much an operation of its mind as its internal activity, and its mind doesn’t run on an economy; it runs on novel algorithms optimized for each specific context. Performing static analysis of this algorithm isn’t going to yield simple laws, apart maybe from what physics, information theory, and computational complexity can say on the topic, and that is orders of orders of magnitude beyond what we have seen.

  • advancedaltruist

    I’m gonna have to go ahead and disagree with you there. I only come here for Eliezer’s posts. Everybody else, even Robin, is a toolbag compared to him. It’s almost annoying how consistently he hits the nail on the head.

    x2

  • http://profile.typepad.com/6p010536535125970b/ Thomas Nowa

    @Jeff and @Tiiba: I couldn’t disagree with either of you more.

    Why would you value debate where both sides are in agreement? Only through disagreement and discussion can true debate take place. The contrasting views of Robin and Eliezer are what make this blog thought-provoking and worth reading.

  • James Andrix

    Zubon:
    I think we’re on the same page. I meant ‘that large’ in comparison to our advantage in other design spaces. We went from kites and balloons to spaceplanes in under 100 years, and we’re even better than that at microprocessor design. We could take this to mean that intelligence is very good at these things, but maybe not so good at improving machine vision algorithms (but still much better than evolution).

  • Tiiba

    Now, I’m not saying that Robin is stupid. It’s just that Eliezer is so amazing.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Surely, impact is measured not in how many partisans you started with, but in opinion shifts.

  • Tim Tyler

    Why would developers so severely misjudge their control of [superintelligent growth]

    Most of today’s developers don’t worry much because they don’t have to – the chance of any one of them creating a superintelligence soon is minuscule. The job of speculating on what might happen 20 years or so in the future is one for philosophers, not coders.
