AI In Far And Near View

Looking far into the distance, your eyes often see a sharp boundary between earth and sky. But if you were to travel to that furthest part of earth your eye can now see, you may not find a sharp boundary there.  Far mode simplifies, not only suppressing detail, but making you think detail is unimportant.  If you saw two ships battling on the horizon, you’d be too tempted to expect the bigger ship to win.

From a distance, future techs also seem overly simple and hence disruptive.  If in 1672 you had seen Verbiest’s steam-powered vehicle, you might have imagined that the first nation with cheap capable cars could conquer the world.  After all, they might build tanks and troop transports, and literally run circles around enemy troops.  But while having somewhat better cars did sometimes help some nations, it was far from an overwhelming advantage. Cars slowly fell in cost and gained in ability and number; there was no particular day when one nation had vastly more capable cars.

Similar scenarios have played out for a great many techs, like rockets, radios, lasers, or computers.  While one might imagine from afar that the difference between none of a tech and a “full” version would give a dramatic advantage, actual progress was more incremental, reducing team differences in tech levels.  Overall differences in wealth and tech capability were usually better explanations for the advantages some nations had over others.

The first far images of nanotech were also simple, stark, and disruptive.  They imagined one team could quickly and reliably assemble, from cheap plentiful feedstocks, large quantities of a large set of big atom arrangements, while other teams had near-current capabilities.  In this scenario, the first team might well conquer the world, or accidentally destroy it via “grey goo.”

The nanotech transition seems less disruptive, however, if we see more detail, and imagine a series of incrementally more capable assemblers, able to build larger objects, faster, more reliably, from more types of feedstocks, using more kinds of local chemical bonds, at a wider range of assembler-assembled angles, and so on.  After all, we already have ribosome assemblers, with a very limited range of feeds, bonds, angles, etc.  Each new type of assembler would lower the cost of making a new class of objects.

Far images of artificial intelligence (AI) can also be overly stark.  If you saw minds as having a single relevant “intelligence” parameter, with humans unable but machines able to change their parameter, you might well rue the day a machine whizzed past the human level.  Especially if you thought God-levels might follow a month later, and if you thought this parameter’s typical value was what determined a team’s power.

However, if you saw the power and growth rates of teams (or societies) as depending on dozens of parameters, including dozens that contribute to the aggregate we often call “intelligence,” you might foresee a less disruptive transition.  Relevant parameters might include many kinds of natural resources, physical capital, social capital, crossroads, standards, computing hardware, memory hardware, communication hardware, data, skills, knowledge, heuristics, reasoning strategies, etc.  The more such parameters are relevant, the harder it is to expect a small team to suddenly improve greatly in enough parameters to overwhelm other teams.

Growth in the power of any team or society has long depended heavily on the growth of all other teams.  Back when humans competed with other species for similar ecological niches, each species improved mainly internally, as species had few ways to learn from each other. So the species whose capabilities grew fastest was bound to displace the others.  But the first human societies to achieve farming did much less displacing of other societies – people could learn farming from neighbors.  With the arrival of industry, not only did other societies copy industrial methods, but the division of labor forced the first industrial cities to share their gains with non-industrialized trading partners.

Long-term growth has consisted both in steady gains in many relevant parameters, and in switching some parameters from constants to parameters that usefully change.  For example, while hunters improved the stories they told each other, the number of stories each hunter could remember was fixed.  After the introduction of writing, however, we’ve had a steady increase in the number of stories we can each remember.

While we can today in principle mechanically change many features of how our brains are organized, in practice we don’t know how to make useful changes, and so such organization parameters are effectively fixed. Computers can also in principle change their own software organization, but they also do not in practice know how to do this usefully. Computer organization does usefully change, but only because humans change it.

The more of our data, skills, knowledge, heuristics, reasoning strategies, etc. we embody in non-human hardware, rather than in human brains, the more advantage we will gain from our ability to usefully change such hardware organization. This will in effect move more of the relevant parameters that describe our power from the category of constants to the category of steadily improving parameters.

We might usefully model our total growth system as dozens of changing parameters, with many dozens of feedback connections between pairs of these parameters, some connections positive and some negative. The overall growth rate of such a complex system could in principle accelerate faster or slower than exponential, and when a new parameter entered such a system, switching from fixed to changeable, the feasible growth rate and acceleration of the entire system could in principle change.
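To make that picture concrete, here is a minimal sketch, assuming a simple linear feedback among growth rates; the function name system_growth, the matrix W, and all numbers below are purely illustrative, not taken from the argument above.  Each parameter grows exponentially at a rate nudged up or down by the other parameters’ rates, and we can check how the system-wide rate shifts when one previously fixed parameter becomes usefully changeable:

import numpy as np

# Toy model: parameter i grows exponentially at rate g[i], and the rates
# reinforce or dampen each other through a feedback matrix W:
#     g = b + W @ g   =>   g = inv(I - W) @ b
# b holds small intrinsic rates; fixed parameters neither grow nor feed back.

def system_growth(W, b, changeable):
    """Solve for per-parameter growth rates, given which parameters can change."""
    W = np.where(np.outer(changeable, changeable), W, 0.0)  # feedback only among changeable ones
    b = np.where(changeable, b, 0.0)                        # fixed parameters do not grow
    return np.linalg.solve(np.eye(len(b)) - W, b)

rng = np.random.default_rng(0)
n = 30                                     # dozens of parameters
W = rng.normal(0.0, 0.03, (n, n))          # weak feedbacks, some positive, some negative
np.fill_diagonal(W, 0.0)
b = rng.uniform(0.0, 0.02, n)              # small intrinsic growth rates

changeable = np.ones(n, dtype=bool)
changeable[-1] = False                     # one parameter starts out effectively fixed
before = system_growth(W, b, changeable).mean()
changeable[-1] = True                      # it becomes usefully changeable
after = system_growth(W, b, changeable).mean()
print(f"mean growth rate before: {before:.4f}, after: {after:.4f}")

With weak couplings like these the system settles into steady exponential growth, and the newly changeable parameter shifts the overall rate only slightly; only a parameter with unusually strong feedback connections would knock the whole system into a noticeably faster mode.  Feedbacks on levels rather than rates could instead give growth faster or slower than exponential, as noted above.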

But empirically it seems that our total system has usually grown exponentially at a constant non-accelerating rate, even as many new parameters have switched from fixed to changeable.  Only rarely (thrice in ten million years) has any novelty substantially changed our growth rate.  So it is unlikely that adding any particular new parameter to the current changeable set will change growth rates.  It is also not obvious that many relevant parameters would at the same time enter the set of usefully changeable parameters.  For example, while a transition to whole brain emulations will simultaneously make it mechanically cheaper to experiment with many brain organization parameters, it could take much search to find ways to make useful changes in each parameter. Different parameters may require very different amounts of search.

Even so, based on historical patterns, I expect that within the next century or so one newly changeable parameter will be a rare pivotal one that knocks the whole system into a faster growth rate.  But I also expect the system to grow quite a bit before another such knock arrives.

When that big knock arrives, a key disruption question is whether a single small team, initially a tiny fraction of world power, could not only find a way to make that key pivotal parameter usefully changeable, but also keep exclusive control over that ability for long enough.  That is, could an initially weak team find and exclusively hold this new ability to grow internally so much as to be able to overwhelm the entire rest of the world, doing so quickly and stealthily enough to avoid retaliation or conquest by others in its early weak period?

Such a scenario is possible, but based on the considerations raised so far in this post, it seems rather unlikely.  Someone might show us details of how upcoming newly changeable parameters actually interact with other important parameters, and so overcome this initial presumption; but until they do, this should be our best estimate.  Yes, the first humans did something similar, but the first farmers and industrialists did not.  And we understand why: as more parameters have entered the changeable set, teams have found more ways to learn and copy from each other, and we have become more dependent on one another via a more elaborate international division of labor.

In sum, as we move more of our data, skills, knowledge, heuristics, reasoning strategies, etc. into non-human hardware, that will change the aggregate “intelligence” of that hardware, and raise our gains from improving the organization of that hardware.  We may do this via ordinary software, via special “general artificial intelligence” software, via whole brain emulations, or something else.   This change will in a sense add to the set of changeable parameters in our system of dozens of interdependent parameters. While each such added parameter is unlikely to change our overall system growth rate, one such change probably will.  But because of greater info sharing and specialization, a single small team seems unlikely to hold and use this change internally enough to overwhelm the rest of the world.

  • Robert Koslover

    An excellent essay, Robin. I suspect that the impact of any game-changing technology in AI will depend substantially on who gets it first. The US was the first to develop nuclear weapons. We promptly used two, then stopped. Had the Axis powers won that particular technological race, history since then would be very different. And not in a good way, in my humble opinion.

    • ad

      The US was the first to develop nuclear weapons. We promptly used two, then stopped.

      IIRC, in 1945 the US only had two available to use. And it used both of them.

      And I don’t think it had many for some years afterwards. A handful of kiloton-range bombs do not convey worldwide omnipotence.

      For purposes of comparison, the RAF and USAAF dropped ~ one million tons of bombs on Germany in the last year of the war.

  • http://don.geddis.org/ Don Geddis

    I immediately thought of nuclear weapons too. If America in 1945 had been run by Alexander the Great, the world wouldn’t have turned out so nicely for most people.

    (That said, as far as AI goes, I agree with Robin’s essay. It seems unlikely that a single weak team would be able to progress all the way to world domination in isolation, without rival teams of humans being able to copy the technology.)

    • gwern

      Nuclear bombs aren’t the only example where one nation suddenly obtained a commanding lead, although they are nice in that they are pure technology and not tactics or strategy as well.

      For example, the Mongol conquest of most of the world; their bows, stirrups, and cavalry tactics destroyed any nation hapless enough to be in their path. Or to take Robin’s car example, the German blitzkrieg was briefly pre-eminent.

  • http://www.weidai.com Wei Dai

    Suppose brain scanning is ready last and my team is the first one to find the last piece of the puzzle. I now own the first emulated brain and have a computing cost of perhaps $0.01 per hour of emulation.

    Do you think I can’t overwhelm the rest of the world even in this scenario, or do you think this scenario is unlikely? Or something else?

    • http://hanson.gmu.edu Robin Hanson

      Details are crucial. Your cost figure must depend on quantity – how does it depend? How close is the second team to completion? What control do your investors have over your venture? How well can suppliers of computing hardware and other crucial inputs to using your ems coordinate? You could get and deserve a very handsome revenue stream, but it takes more than that to overwhelm the world.

      • http://www.weidai.com Wei Dai

        I think I see what you’re getting at… if I can’t obtain enough specialized emulation hardware to quickly scale up my operations, my competitors can perhaps reproduce my breakthrough in time to catch up.

        But what if brain scanning came so late that I can get $0.01 per hour of emulation on general purpose, off-the-shelf computing hardware, so that I have essentially no scaling constraints? That would let my team take over the world, right? Let’s say that the second team is 3 months behind me. And I’m not sure why you ask about investor control… what difference does it make?

      • http://www.weidai.com Wei Dai

        And of course the first thing I’d do is to throw as many ems as I can into the task of shutting down or delaying my competitors. I can run tens of thousands of ems for the cost of one human security consultant, so it’s hard to see how to defend against that kind of attack.

      • http://hanson.gmu.edu Robin Hanson

        Wei, you’ll need to secure access to the resources you will need to run all those ems, and do so against attempts by others to coordinate to deny you access to those resources. The more that your development is a surprise, and the more prepared you are, the better a chance you have to achieve a surprise takeover.

        On investors, if an em transition is anticipated enough, we should expect very large amounts of capital to be collected behind teams that attempt to achieve the first em. These collections will of course be very concerned to maintain control of those teams. Even if one team wins and overwhelms the others, the size of the community behind that team would not be small.

      • http://www.weidai.com Wei Dai

        Robin, I don’t think we really disagree much here. I mainly want to establish that there is at least one plausible scenario where a takeover of the world is possible, in which case it has to be a significant component of one’s expected utility computation (cf. Pascal’s wager), especially for those who consider the ultra-competitive Malthusian outcome to be of little utility.

        The division of control between investors and the core team obviously depends on the relative value of capital vs. insight, which I don’t have much to say about. But I imagine if the first upload has already been created, and the only thing needed to take over the world is more capital to purchase large quantities of general purpose, off-the-shelf computing hardware, I can quickly obtain additional capital at a very low cost (i.e., billions of dollars of investment for a tiny share of control over the venture).

    • Lightwave

      The thing is, the first “useful” emulations are probably going to be very crude and buggy, and to have a number of issues (e.g. they tend to go crazy after a few hours and you have to reset the program, or whatever). They will still be useful, but not vastly more efficient than humans, only a tiny bit more so, and in some specialized cases. Even if this is the case, it will still be economically advantageous to build them and use them for the minor gains. But you won’t be able to take over the world with these, and the incremental improvements will come gradually over the following years.

  • http://pancrit.org Chris Hibbert

    @Wei Dai, It seems like most engineering consists of finding a good enough solution first and better solutions shortly thereafter. Look at how quickly many touch screen smart phones came out. Someone was first to market, but they didn’t have a big enough lead time to make much of a difference. Most of the time, it’s like that.

  • Aron

    This is very well said. Evolution developed systems more intelligent in certain ways than itself, and humans have done that as well with their institutions and tools, but I think in both cases significant expenditures on trial and error are required.

    Yes, a human can do better than selecting random changes when designing a system, but this does not obviate the need for trial and error to validate the changes. This requirement then pegs the rate at which intelligence can be developed to the amount of resources devoted to it, since each trial carries a certain expense. Also, I would stipulate that intelligence is a domain-specific capacity. Generality is an illusion: our minds are amazing at visual tasks and ridiculously poor at simple math, and there is little evidence that improving one would improve the other by necessity. Therefore, the more you want an intelligence to accomplish, the more design work is involved, which requires more trials and resources to achieve.

    And then historically we can say that problems that require lots of resources tend to be solved *first* in a distributed fashion for various straightforward reasons.

  • http://timtyler.org/ Tim Tyler

    What we can see is that tech allows rapid growth and large eventual advantage. E.g. see HP, Apple, Microsoft, all of which exploded from two-man garage operations.

    So far – AFAIK – no company has overthrown its associated government – though some have expanded beyond it.

    Governments have anti-trust measures designed to keep companies in check. They have secrecy orders – designed to make sure they have the best important tech. If a company develops potentially world-beating tech they may find themselves rapidly nationalized.

    The question then becomes: could a big government take over the world? The thing is, big governments are *already* taking over the world. We have huge mega-governments sprawling across the major continents, absorbing their neighbours into enormous unions. If we have a Eurasian union by the time we get machine minds, a big government will have *already* taken over most of the world.

    It seems highly unlikely that such an organization would be ruffled by better machines arising from rebels within its borders. It would just assimilate the machines and add them to its arsenal.

  • http://www.rokomijic.com Roko

    I am beginning to hope that Robin is mostly right about ems coming first, and about the process being sufficiently gradual that the major political groups at the time share the spoils roughly equally.

    The other extreme of possibility, a local hard takeoff of recursively self-improving software AI, seems almost inevitably to lead to an extinction event.

    I think that the probability that Robin gave of “less than 1%” for the latter scenario is a comforting thought, but I don’t think that reliance on historical data about events that are “vaguely similar” to the AI transition can ever justify that kind of confidence. AI is not farming, they are different, and only a limited amount of evidence is available from such analogical reasoning.

    • http://hanson.gmu.edu Robin Hanson

      I don’t recall publishing such a low probability estimate. I certainly agree the probability is high enough to justify a large effort to avoid that bad scenario. (Even a 1% chance is enough.)

      • Roko

        Hmm, I was sure that you said that. Oh well, I take it back. If you say “the probability is high enough to justify a large effort to avoid that bad scenario”, I certainly agree, but it seems more important to focus on the more easily winnable em scenario as far as “activism” goes.

      • http://www.bayesianinvestor.com Peter McCluskey

        That 1% number was for a similar but not identical scenario:
        http://www.overcomingbias.com/2008/11/setting-the-sta.html

      • http://hanson.gmu.edu Robin Hanson

        I stand corrected.

  • Corey Newsome

    The only part that matters: AI reaches critical point, gets smarter, uses extra smarts to get even smarter, repeat. Think nuclear fission, not agricultural revolution. Taking an outside view of a scenario doesn’t work when you don’t use the right reference class. The creation of a really powerful optimization process, like nuclear fission, is a phenomenon outside the realm of economics. This is not ‘technology’: technology is a tool we use to enhance our smarts. Creating smarts themselves is whole ‘nother realm.

    Or, in the esoteric and nearly dead language of ‘Laugh out loud: feline!’:
    TEH ONLY PART DAT MATTERS: AI REACHEZ CRITICAL POINT, GETS SMARTR, USEZ EXTRA SMARTS 2 GIT EVEN SMARTR, REPEAT. FINKZ NUCLEAR FISHUN, NOT AGRICULTURAL REVOLUSHUN. TAKIN AN OUTSIDE VIEW OV SCENARIO DOESNT WERK WHEN U DOAN USE TEH RITE REFERENCE CLAS. TEH CREASHUN OV RLY POWERFUL OPTIMIZASHUN PROCES, LIEK NUCLEAR FISHUN, IZ FENOMENON OUTSIDE TEH REALM OV ECONOMICS. DIS AR TEH NOT TECHNOLOGY: TECHNOLOGY IZ TOOL WE USE 2 ENHANCE R SMARTS. CREATIN SMARTS THEMSELVEZ IZ WHOLE NOTHR REALM.

  • mjgeddes

    I seem to recall Hernán Cortés:

    “Cortés’ contingent consisted of 11 ships carrying about 100 sailors, 530 soldiers (including 30 crossbowmen and 12 arquebusiers), a doctor, several carpenters, at least eight women, a few hundred Cuban Natives and some Africans, both freedmen and slaves.”

    This was apparently enough to ‘do over’ the entire Aztec Empire! Plenty of other examples from history of small forces with a small technology advantage completely ‘doing over’ a large number of opponents.

    Actually, I’m sure that ‘overwhelming the rest of the world’ would be ridiculously easy. As you say, “we have become more dependent on one another via a more elaborate international division of labor”. That’s a big exploitable weakness: just get control of a few key pieces of infrastructure in a surprise attack and the whole thing falls over long before any opponents can cooperate. Your faith in ‘the market’ is quite misplaced.

    In fact a takeover may not even be obvious. ‘The greatest trick the devil ever pulled was convincing the world he didn’t exist’. Dramatic displays of power are a human status display after all. A super-intelligence could simply manipulate things from the shadows, coordinating events and pulling strings to achieve desired results – a bit of clever oratory here, getting a few key people into power there, accumulating a bit of wealth to do this and that – make it all look like chance – and run the show in secret, like ‘Hitch-Hikers Guide To the Galaxy’, where the colorful ‘President of the Galaxy’ Zaphod Beeblebrox was actually all a front for an anonymous guy in a hut somewhere who was the only one who really knew what was going on.

  • http://entitledtoanopinion.wordpress.com TGGP

    Robert, somewhat nitpicking but I think the Allied powers are actually more vulnerable to the charge of [pinky&thebrain]TRYING TO TAKE OVER THE WORLD[/pinky&thebrain] than the Axis, who were never nearly as coordinated. The nukes we did drop (which, combined with tests, used up most of our supply) also killed fewer people than our conventional firebombing did.

    mjgeddes:
    I think it noteworthy that the Aztecs had many defeated vassals underneath them who revolted when the opportunity came along, so it wasn’t just Cortez and his merry men.

    • http://entitledtoanopinion.wordpress.com TGGP

      I should have linked to this interview with John Mueller. Nukes? No Big Whoop

    • mjgeddes

      It was Cortes who instigated the rebellions so that’s an example of his application of social intelligence (‘a bit of clever oratory here and there’).

    • James Daniel Miller

      Cortes was able to trick many of the Aztecs’ vassals into supporting him even though the vassals would have been better off if Cortes had been defeated.

      • Anonymous

        Better off materially perhaps, but there’s a lot of painful hate to suppress before you reach Realpolitik.

      • ad

        Can you prove that the vassals would have been better off if Cortes had been defeated?

        Given a choice between overlords who wanted a lot of gold and silver, and overlords who wanted to cut my heart out, I think I would be better off with the former option.

        I might point out that the vast majority of deaths were due to epidemics, and the bugs presumably did not care which human won the war.


  • XOR

    Tech gets harder to master. Newer, more difficult tech cannot become distributed across teams as fast as earlier, simpler tech.
    Can any team be so superior that no other team can replicate the research? I’d argue it gets more probable by the day.

    There will come a point when no usable information will seep outside the walls of firms or small teams. Some techs will evolve into black boxes. You really can’t tell exactly what goes into a CPU chip these days.

    There’s a high probability that there will be, if there already aren’t, “untouchable” tech firms, whose in-house theoretical knowledge, research and production methods, and equipment are so tricky that no matter how many resources are thrown at the problem, competitors can’t catch up.

    And if these people are smart enough to stay quiet and out of sight, which they will be, the competition won’t even know what to look for, until it’s way way too late. The spy organizations of the world know this.