Friendly Teams

Wednesday I described UberTool, an imaginary firm planning to push a set of tools through a rapid mutual-improvement burst until they were in a position to basically “take over the world.”  I asked when such a plan could be reasonable.

Thursday I noted that Doug Engelbart understood in ’62 that computers were the most powerful invention of his century, and could enable especially mutually-improving tools.  He understood lots of detail about what those tools would look like long before others did, and oversaw a skilled team focused on his tools-improving-tools plan.  That team pioneered graphical user interfaces and networked computers, and in ’68 introduced the world to the mouse, videoconferencing, email, and hypertext, a forerunner of the web.

I asked if this wasn’t ideal for an UberTool scenario, where a small part of an old growth mode “takes over” most of the world via a head start on a new, faster growth mode.  Just as humans displaced chimps, farmers displaced hunters, and industry displaced farming, would a group with this much of a head start on such generally better tech have a decent shot at displacing industry folks?  And if so, shouldn’t the rest of the world have worried about how “friendly” they were?

In fact, while Engelbart’s ideas had important legacies, his team didn’t come remotely close to displacing much of anything.  He lost most of his funding in the early 1970s, and his team dispersed.  Even though Engelbart understood key elements of tools that today greatly improve team productivity, those tools do not seem to have made his team radically productive, even at the task of improving its own tools.

It is not so much that Engelbart missed a few key insights about what computer productivity tools would look like.  I doubt if it would have made much difference had he traveled in time to see a demo of modern tools.  The point is that most tools require lots more than a few key insights to be effective – they also require thousands of small insights that usually accumulate from a large community of tool builders and users.

Small teams have at times suddenly acquired disproportionate power, and I’m sure their associates who anticipated this possibility used the usual human ways to consider that team’s “friendliness.”  But I can’t recall a time when such sudden small team power came from an UberTool scenario of rapidly mutually improving tools.

Some say we should worry that a small team of AI minds, or even a single mind, will find a way to rapidly improve themselves and take over the world.  But what makes that scenario reasonable if the UberTool scenario is not?

  • gmlk

    I think the problem is (missing) infrastructure. An UberTool does nothing unless you have the right infrastructure to make efficient and effective use of it.

    Having a longer lever does not give you more leverage unless you also have a strong enough pivot at the right place and enough room to work.

    Slightly related: The Myth of Leapfrogging

  • http://yudkowsky.net/ Eliezer Yudkowsky

    What, in your perspective, distinguishes Doug Engelbart from the two previous occasions in history where a world takeover successfully occurred? I’m not thinking of farming or industry, of course.

  • Tim Tyler

    I don’t think there have been any “world takeovers” in human history – unless you count takeovers by individual genes or memes, or by other species.

    If we are considering the entire history of life, however, there have probably been many more than two “world takeovers”.

  • http://agiblog.net derekz

    I’m not particularly concerned about the “hard takeoff” scenario as a near-term threat, but I’d say the answer to your question is that analogy is a poor reasoning method. An AGI is not Doug Engelbart; treating them as so similar that the failure of one implies the failure of the other seems unjustified.

    More generally, I wonder if Rationalists should be forbidden to use analogy at all in serious analysis. As a source of inspiration, ideas, possibilities to explore — analogy is great. But it is not a valid inference method.

    However, the fact that an AGI is not Engelbart also does *not* imply that it would succeed at things Engelbart could not do. That type of reasoning (which I see a lot) is even worse than misused analogy.

  • http://hanson.gmu.edu Robin Hanson

    Eliezer, I discussed what influences transition inequality here.

    Derekz, I doubt if any of us know what it would be like to reason without analogy.

  • http://profile.typekey.com/aroneus/ Aron

    I would tend to predict that there is always sufficient diffusion from the concentrated optimizing kernel toward everyone else to give the rest of the world the capability to predict, outflank, and/or otherwise contain the threat of a runaway monopoly, single-point strong AI, nation-state, etc.

    A strong AI organization would have to pass through a series of breakthrough steps before the ideal humanity-crushing paperclip maximizer could be built, and it seems dramatically improbable that all of those breakthroughs would come from a single organization without leakage to the rest of the world. And so long as the rest of the world is within shooting range of the top, its collective ability/power should remain superior.

    Hopefully.

  • Tim Tyler

    To recap, one of the last world takeovers took place when DNA replaced RNA as the primary heritable medium in biology. DNA most likely got started in a single organism – and then, after a while, all the other organisms on the planet found themselves with no surviving descendants – a genetic takeover.

    Today, new heritable media have arisen – the new replicators. These are responsible for an enormous mass-extinction – and the extinction may well go all the way – until none of the primitive, DNA-protein based organisms which evolution clumsily tinkered together remain – a memetic takeover.

  • http://jamesdmiller.blogspot.com/ James D. Miller

    The small team was the Nazi leadership and the mutually improving tools were their propaganda instruments.

  • Yvain

    Wasn’t Engelbart right? Computers were a set of mutually self-enhancing tools that became very powerful in a very short amount of time, and they have taken over the world. It’s just that most of the innovation came from people other than Doug Engelbart, which, considering the non-Engelbart:Engelbart ratio among computer scientists, is statistically plausible. The creation of computing technology was a society-wide effort, and considering the resources necessary, it couldn’t have been otherwise.

    To create Industry Tech N+1, you need Industry Tech N, but you also need coal, iron, water-power, workers, food for the workers, land on which to build factories, engineers, and inventors. You don’t just create industrialism in your basement. UberTool can’t spend a century developing industry and then march out of its office to take over the world, because it needs to have the world or a large chunk thereof just to start industrializing.

    Computing Tech N makes Computing Tech N+1 easier to develop, but it’s not sufficient to create Computing Tech N+1. That takes high-tech factories, thousands of hours of skilled labor, money, and sometimes genius. Invention of the mouse speeds up all future computer tasks, but you can’t leave a mouse in a room overnight and expect it to have written Windows XP when you get up. To go from the mouse to Windows XP still requires tens of thousands of hours of skilled labor, a bunch of money, high-tech factories, and a few genius-level insights. That’s why Doug Engelbart working alone couldn’t conquer the world: he had the mouse, but not any of that other stuff.

    Artificial intelligence is different from either of these because once you have a self-improving AI, AI Tech N is both necessary and sufficient to develop AI Tech N+1. You can just leave a self-improving AI in a room overnight and expect it to be a Power when you wake up.

    If you wake up.

  • http://hanson.gmu.edu Robin Hanson

    James, Nazi propaganda tools were useful, at least for making relative gains, but I don’t see that they improved each other much.

    Yvain, but why would a self-improving AI be so much more autonomous than a self-improving tool team?

  • Tim Tyler

    Computers haven’t “taken over the world” yet. Weigh the world’s silicon chips and they are a tiny fraction of the weight of all the human brains in the world – let alone all the animal brains. It’s the same story with memory, sensors and actuators. Machine civilisation is still at the “birth trauma” developmental stage.

  • Grant

    I’ve wondered the same thing in different terms. Most of our advancement stems from increasing specialization. Is it rational to think AI will be different?

    If self-improving AGI is created, will that be different? Humans haven’t developed many tools that make us smarter in general. Sure, we have calculators, event-planners, mathematics, and all sorts of things to compensate for our serious cognitive faults (such as the inability to do complex arithmetic in our heads), but we don’t seem to make much in the way of serious advances in our general intelligence. We need more people, and more specialization among them, to solve more complex problems. If AGI can be self-improving, maybe it can do more with fewer minds?

  • Tim Tyler

    An individual with a concealed mobile phone and a link to an Indian test-solving sweatshop would probably achieve an impressive IQ test score. Especially so if you compare against the score of a caveman.

    What are the grounds for asserting that intelligence augmentation has not already enormously improved human intelligence?

    • gwern

      Depends on the test.

      Wechsler-style tests – vocab, simple math, that sort of thing? Sure. I could buy that.

      A Raven’s-style matrices test? Many of them aren’t online with answers; I would be very surprised if an Indian sweatshop (or Google) could tell you the answer with better odds than guessing, unless there’s a high-IQ individual there who can solve the matrix, in which case all that’s been done is test a different person. A smart individual may be able to figure out the answer and explain it to the rest, but the rest won’t figure it out even as a group.

  • http://www.spaceandgames.com Peter de Blanc

    Tim, that’s a silly example. You’re comparing one human to a team of humans.

  • Tim Tyler

    That is a fundamental part of the power of intelligence augmentation. Now, each individual human has the collective knowledge and intelligence of the planet at his fingertips – due to improvements in networking technologies.

    The test is not biased – the rules of the test for the caveman are exactly the same: he just has to complete the test within an hour and hand it back in.

  • Will Pearson

    “What, in your perspective, distinguishes Doug Engelbart from the two previous occasions in history where a world takeover successfully occurred? I’m not thinking of farming or industry, of course.”

    For me, it is the sheer amount of testing done to adapt to solving new problems. Animal brains sped up the rate of testing, and human brains could test testing methodologies themselves, expand upon them, and pass them on. But a huge amount of testing was still done.
