Outside View of Singularity

An inside view forecast is generated by focusing on the case at hand, by considering the plan and the obstacles to its completion, by constructing scenarios of future progress, … The outside view … focuses on the statistics of a class of cases chosen to be similar in relevant respects to the present one.  [Kahneman and Lovallo '93]

Most everything written about a possible future singularity takes an inside view, imagining details of how it might happen.  Yet people are seriously biased toward inside views, forgetting how quickly errors accumulate when reasoning about details.  So how far can we get with an outside view of the next singularity?

Taking a long historical view, we see steady total growth rates punctuated by rare transitions when new faster growth modes appeared with little warning.  We know of perhaps four such "singularities": animal brains (~600MYA), humans (~2MYA), farming (~10KYA), and industry (~0.2KYA).  The statistics of previous transitions suggest we are perhaps overdue for another one, and would be substantially overdue in a century.  The next transition would change the growth rate rather than capabilities directly, would take a few years at most, and the new doubling time would be a week to a month.
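
To make that kind of extrapolation concrete, here is a minimal Python sketch of the ratio logic.  The doubling times below are rough, assumed round numbers of my own, not figures from this post, so treat the output as an illustration of the method rather than a forecast.

```python
# Illustrative sketch: extrapolating the next mode's doubling time from the
# ratio pattern of previous modes. All doubling times are rough, assumed
# round numbers, not figures taken from this post.

doubling_times_years = [
    ("animal brains (brain-capability growth)", 30e6),  # assumed
    ("hunting-era humans", 230e3),                      # assumed
    ("farming", 900.0),                                 # assumed
    ("industry", 15.0),                                 # assumed
]

times = [t for _, t in doubling_times_years]

# Factor by which each new mode shortened the doubling time.
ratios = [times[i] / times[i + 1] for i in range(len(times) - 1)]
print("speed-up factors between successive modes:", [round(r) for r in ratios])

# Geometric mean of past speed-ups, used as a crude guess for the next one.
geo_mean = 1.0
for r in ratios:
    geo_mean *= r
geo_mean **= 1.0 / len(ratios)

next_doubling_days = times[-1] / geo_mean * 365
print(f"extrapolated next doubling time: roughly {next_doubling_days:.0f} days")
```

Under these assumed inputs the sketch lands in the ballpark of a month per doubling, the same order of magnitude as the "week to a month" estimate above.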

Many are worried that such a transition could give extra advantages to some over others.  For example, some worry that just one of our mind children, an AI in some basement, might within the space of a few weeks suddenly grow so powerful that it could take over the world.  Inequality this huge would make it very important to make sure the first such creature is "friendly." 

Yesterday I said yes, advantages do accrue to early adopters of new growth modes, but these gains seem to have gotten smaller with each new singularity.  Why might this be?  I see three plausible contributions:

  1. The number of generations per growth doubling time has decreased, leading to less inequality per doubling time.  So if the duration of the first-mover advantage, before others find similar innovations, is some fixed fraction of a doubling time, that duration contains fewer generations (see the back-of-the-envelope sketch after this list).
  2. When lineages cannot share information, the main way the future can reflect a new insight is via insight-holders displacing others.  As we get better at sharing info in other ways, the first insight-holders displace others less.
  3. Independent competitors can more easily displace one another than interdependent ones.  For example, since the unit of the industrial revolution seems to have been Western Europe, Britain, which started it, did not gain much relative to the rest of Western Europe, but Western Europe gained more substantially relative to outsiders.  So as the world becomes interdependent on larger scales, smaller groups find it harder to displace others.
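
For contribution 1, here is a quick back-of-the-envelope in Python.  The doubling times and the 25-year human generation length are assumed round numbers, not figures from this post.

```python
# Generations per growth doubling, under assumed round numbers.
# Doubling times and the 25-year generation length are illustrative guesses.

eras = [
    ("hunting", 230e3),   # assumed doubling time, years
    ("farming", 900.0),   # assumed
    ("industry", 15.0),   # assumed
]
generation_years = 25.0

for name, doubling in eras:
    print(f"{name}: about {doubling / generation_years:.1f} generations per doubling")
```

With these guesses the count falls from thousands of generations per doubling in the hunting era to a few dozen under farming to less than one under industry, which is the decline contribution 1 relies on.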

The first contribution is sensitive to changes in generation times, but the other two come from relatively robust trends.  An outside view thus suggests only a moderate amount of inequality in the next singularity – nothing like a basement AI taking over the world.

Excess inside viewing usually continues even after folks are warned that outside viewing works better; after all, inside viewing better shows off inside knowledge and abilities.  People usually justify this via reasons why the current case is exceptional.  (Remember how all the old rules didn’t apply to the new dotcom economy?)  So expect to hear excuses why the next singularity is also an exception where outside view estimates are misleading.  Let’s keep an open mind, but a wary open mind.

  • http://profile.typekey.com/felix_typekey/ Felix

    Taking the long historical long view, off the top of the head, might there also be:

    1) Eyes
    2) Hands
    3) Flight
    4) Animal/plant split
    5) Two legged walking
    6) Symbolic recording
    7) Printing press / publishing
    8) Literal recording (e.g. audio/video taping)
    9) Audio communication
    10) Electronic communication outside the near-visual bandwidths
    11) Multicellular life
    12) Life on the moon
    13) Elvis
    14) Any bubble in which “all the rules have changed”
    15) That new neighbor who mows his lawn at 8AM.

    etc.

    I’m wondering what the criterion for a “singularity” is. How is such a thing recognized, let alone measured?

    Robin, are you, perhaps, overthinking from too little information and too much assumption?

    Put another way, isn’t this sensitivity to accelerating changes a reflection of a bias that would be expected of any information processing system that’s built to spot 80/20-rule situations in its input? That is, an information processing system not unlike our nervous system.

  • http://hanson.gmu.edu Robin Hanson

    Felix, if you read a claim of mine you find unsupported next to a link, you might try following the link. See also here. I’ve been building toward this summary post in many other posts, and it can’t be a summary if it includes all the previous details.

  • steven

    But wait! Out of all important historical events, the vast majority weren’t growth mode transitions. So according to my more-outside-than-you perspective, transhuman AI isn’t even going to cause a growth mode transition. You may think you have inside information, but that just means you’re biased.

  • Caledonian

    You may think you have inside information, but that just means you’re biased.

    That statement sums up virtually every post on this blog.

  • Grant

    One key difference is that (at least while they were going on) the means of the industrial and agricultural revolutions weren’t considered dangerous, with the potential to take over the world or destroy the human race. If AGI is achieved, governments could attempt to regulate and/or monopolize it in ways that were never done with industry and agriculture.

    This doesn’t seem at all likely to happen with networks of interconnected SIs, though.

  • A. Madden

    Something that kind of jumps out at me:

    Evolution happens without divine guidance: mutations are tested, and if they work they may be adopted.
    It appears that human innovations come about in the same guess-and-commit way.

    So the reason for the big shortening between meta-innovations, i.e. growth-rate transitions, is the increased rapidity with which ideas are generated and tested.

    I (me) might try to work that into my own estimate. Also that means that AI or whatever the next innovation happens to be (although with this model AI stands out, because it would allow the trial-and-error process to be taken up by machines as well as humans) might come about accidentally. But this seems realistic given the popularity (and I know I am an occasional user) of trial-and-error-style computer programming.

  • http://hanson.gmu.edu Robin Hanson

    Steven, yes, the next important historical event is not likely to be a growth mode transition, for the reason you give. But I don’t agree we can assume transhuman AI is in fact the next important historical event.

    Grant, the question here is exactly what odds we should give to an AI transition allowing a small part to take over the world or destroy the human race. I’m saying an outside view gives low odds; you are apparently estimating high odds based on an inside view.

  • steven

    But I don’t agree we can assume transhuman AI is in fact the next important historical event.

    I didn’t say or imply that “we can assume transhuman AI is in fact the next important historical event”. But transhuman AI would be an important historical event, so going by your logic, unless it’s a special historical event, it would probably not cause a growth mode transition. For every reason you can give me why transhuman AI is special among historical events, I can give you a reason why a transhuman-AI-caused growth mode transition is special among growth mode transitions. By throwing away enough information you can half-prove anything.

  • spindizzy

    The “Dreams of Autarky” link is the first time this site has drawn my attention to a bias I strongly experience but hadn’t considered before.

    From my perspective, the bias might result from my strongly risk-averse personality. I wonder if it correlates with other behaviours e.g. hoarding, debt avoidance, illusion of control, social introversion or libertarian alignment? I can imagine them all being part of a package. :)

    On an intellectual level, I’m not sure that interdependent systems are necessarily more risky. Risk in largely interdependent systems could come from two sources:

    (1) susceptibility to external threats, which depends on the degree to which component parts can be substituted for one another.

    (2) susceptibility to internal gaming, which does seem to be inherently greater in more interdependent systems.

    BTW, Robin, I’m sure you are busy but if you get the chance to spoonfeed the maths behind your previous post (the genocide one) I would certainly appreciate it.

  • http://profile.typekey.com/halfinney/ Hal Finney

    Steve, the “outside view” does not specifically predict transhuman AI as the driver for the (next) singularity. It merely predicts that it must be something that can cause the economy to double in size every couple of weeks or so. And this would imply that it must be a “meta innovation”, something that speeds up virtually everything about human society. Transhuman AI, or at least some form of super-intelligence, is merely the most plausible (or perhaps only) candidate on the drawing boards that might accomplish this.

  • http://transhumangoodness.blogspot.com Roko

    Robin: “An outside view thus suggests only a moderate amount of inequality in the next singularity – nothing like a basement AI taking over the world.”

    Steven: “For every reason you can give me why transhuman AI is special among historical events, I can give you a reason why a transhuman-AI-caused growth mode transition is special among growth mode transitions. By throwing away enough information you can half-prove anything.”

    There are a lot of superintelligence scenarios that would fit in with Robin’s prediction: gradual improvement in cognitive enhancement technologies, and whole-brain emulation technology with copyable uploads, being the two most obvious. Very slow-takeoff recursively self-improving AGI might also fit in with this – if an AGI gets smarter quite slowly, e.g. with a timescale of ~1 year to go from “average human” to “most intelligent human on the planet” intelligence level.

    With no extra information to go on, one would have to conclude that Robin is probably right, with probability 3/4 (since 3 of the 4 proposed superintelligence pathways seem to follow the standard pattern).

    However, I don’t think that the last scenario, fast recursively self-improving AGI, fits with any of the previous events or patterns. Two changes happening at the same time, a change of substrate (biology to silicon) and a shift from fixed to recursively improving intelligence, seem to me a more profound change than any of the other singularities.

  • http://brokensymmetry.typepad.com Michael F. Martin

    Abraham Lincoln had a view on what would trigger the next singularity:

    http://brokensymmetry.typepad.com/broken_symmetry/2008/04/lessons-from-li.html

  • steven

    I don’t accept the dichotomy of blind trend-extrapolation (outside view) vs making up detailed stories about how it might happen (inside view). Theoretical non-story arguments like the various human/computer differences seem to me to make a hard takeoff plausible, and trend-extrapolation seems to me to give only evidence that’s 1) weak and 2) causally distant (uninformative conditional on more specific knowledge).

    Robin’s points 2 and 3 don’t apply if a basement AI doesn’t need to share information or depend on other thinkers.

  • Grant

    Grant, the question here is exactly what odds we should give to an AI transition allowing a small part to take over the world or destroy the human race. I’m saying an outside view gives low odds; you are apparently estimating high odds based on an inside view.

    I wasn’t giving any odds at all; I was just pointing out that a large number of people would fear AGI more than, say, irrigation or coal mines. It seems to me that fear would manifest itself somehow, and alter the way in which the singularity unfolds.

    Suppose it takes one generation for a skeptical nation to overcome an irrational fear of AGI. One generation is nothing to agriculture, little to industry, and annoying to IT (creating a significant gap between older computer illiterates and the younger generation). What would it mean in the time-frame of AGI? It seems to me that unlike previous revolutions, AGI could itself advance faster than many societies could politically and culturally adopt it, meaning a few would be given huge advantages over many.

  • Tim Tyler

    By comparing the origin of multicellularity, the origin of human brains, the origin of farming, and the origin of industry, we conclude that hypothetical first movers in these transitions gained progressively less from them?

    It all seems pretty vague to me. Industry and farming spread horizontally, so you wouldn’t /expect/ the DNA of their owners to benefit in the first place – rather the associated ideas are what spreads – at the expense of other ideas about how to live.

    Anyway, the conclusion seems to be that the inventors of AI will enjoy few special benefits, and not turn into future versions of Bill Gates. That seems fair enough; if they get incredibly rich and powerful, it probably won’t last for long. They will soon enough get wiped out by vastly superior technology… along with all the rest of the ancient, crappy, unmodified humans.

  • http://hanson.gmu.edu Robin Hanson

    Tim, until there is a substantial space or deep Earth economy/ecology all transitions will spread “horizontally.” Being “wiped out” is the sort of transition inequality I’m saying the outside view doesn’t favor.

    Steven, you are repeating the standard argument inside viewers give against outside views, that it neglects crucial info.

  • Tim Tyler

    Re: “until there is a substantial space or deep Earth economy/ecology all transitions will spread “horizontally.””

    The idea of horizontal transmission here was to illustrate that farming and agriculture were heritable *ideas*, and may well have practically wiped out the other *ideas* that they competed with.

    AI is also an idea, and one that is capable of spreading rapidly – but unlike farming and agriculture it is a replacement technology for an important DNA-based adaptation: brains. Rather than competing only with other ideas, it will more effectively compete with humans themselves – in conjunction with various associated developments in sensors and actuators, of course.

    Re: “what the outside view doesn’t favor”. I see what you are saying – I just think it’s nonsense. The idea of looking at previous important developments, and trying to use them to see into the future is a good one, but the relevant important developments are really the previous genetic takeovers. Agriculture and industry transitions throw only very limited light on AI. It’s like trying to predict the properties of neutron stars by looking at gold and lead.

    The technology advances we have seen so far tend to increase inequalities – by allowing wealth and power to be concentrated. Inequalities are greater now than ever before – with celebrities earning billions of dollars while much of the world is on the bread line. Further technological progress seems extremely likely to widen this gap.

  • http://hanson.gmu.edu Robin Hanson

    Tim, virtually every innovation is an “idea.” You seem to be saying the relevant category to use for an outside view is “genetic takeovers”, but since you are using “genetic” metaphorically I find this category hard to understand. Please try to be more precise so we can evaluate your suggestion. It is true that per-capita wealth inequality across the world is at an all-time high, but this is mainly because the wealth peaks are at an all-time high, while the valleys remain at their lowest feasible level.

  • http://www.ribbonfarm.com Venkat

    Perhaps you guys have addressed this elsewhere, but given that most evolution (technological, social, political…) seems to follow the jumping punctuated-equilibria model (cf. Thomas Kuhn, Joel Mokyr, McLuhan…), how do you separate the wheat from the chaff? The eternal behaviorist dilemma applies here.

  • Tim Tyler

    “Genetic takeover” is a concept from Genetic Takeover and the Mineral Origins of Life, A. G. Cairns-Smith, Cambridge University Press, 1982.

    Here is a page by me on the topic:

    http://originoflife.net/takeover/

    It is not my suggestion that we are witnessing a modern Genetic Takeover:

    “machines could carry on our cultural evolution, including their own increasingly rapid self-improvement, without us, and without the genes that built us. It will be then that our DNA will be out of a job, having passed the torch, and lost the race, to a new kind of competition. The genetic information carrier, in the new scheme of things, will be exclusively knowledge, passed from mind to artificial mind.”

    Human Culture – A Genetic Takeover Underway – Moravec, 1987

    “Millions of years later, another change is under way in how information passes from generation to generation. Humans evolved from organisms defined almost totally by their organic genes. We now rely additionally on a vast and rapidly growing corpus of cultural information generated and stored outside our genes – in our nervous systems, libraries, and, most recently, computers.

    Our culture still depends utterly on biological human beings, but with each passing year our machines, a major product of the culture, assume a greater role in its maintenance and continued growth. Sooner or later our machines will become knowledgeable enough to handle their own maintenance, reproduction and self-improvement without help. When this happens the new genetic takeover will be complete. [...]”

    – Moravec, 1988

    “Cultural evolution is many orders of magnitude faster than DNA-based evolution, which sets one even more to thinking of the idea of ‘takeover’. And if a new kind of replicator takeover is beginning, it is conceivable that it will take off so far as to leave its parent DNA (and its grandparent clay if Cairns-Smith is right) far behind. If so, we may be sure that computers will be in the van.”

    – Dawkins, 1982.

  • http://hanson.gmu.edu Robin Hanson

    Tim, you say “the relevant important developments are really the previous genetic takeovers”, a phrase I now understand better, but the right way to do multivariate analysis is not to first choose the “right” data. Instead one collects as much relevant data as possible and then sees what statistical inference says about which data can in fact be ignored without changing the results much. Saying “throw this out of your dataset, it is not relevant” is less useful than saying “you’ve missed this relevant data, your conclusions will change when you include them.”

  • Tim Tyler

    I did not say “throw this out of your dataset, it is not relevant”.

    We do have more data to go on than just the recent economic successes of our ancestors.

  • http://www.hopeanon.typepad.com Hopefully Anonymous

    Singularities in principle don’t seem that hard to model. Have people tried modeling how quickly an agent takes over a game (if it does) with the same analytical algorithms but quicker processing speed? Have they looked at how interdependency affects that? Robin, you have some interesting hypotheses that seem open to be tested in a variety of ways.
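
One way to read this suggestion is as a toy simulation: identical strategies that differ only in "processing speed", plus an interdependence knob that forces some sharing of gains. Everything below (the growth rule, the sharing rule, the 90% takeover threshold, the function name) is a hypothetical sketch, not a model anyone in this thread has endorsed.

```python
# Toy sketch of the suggestion above: identical strategies, different speeds.
# All modelling choices (growth rule, sharing rule, 90% threshold) are
# illustrative assumptions.

def rounds_to_takeover(speeds, interdependence=0.0, threshold=0.9, max_rounds=10_000):
    """Return how many rounds until the fastest agent holds `threshold`
    of total resources, or None if it never does within max_rounds.

    Each round an agent's resources grow in proportion to its speed;
    `interdependence` is the fraction of each round's gains pooled and
    shared equally among all agents.
    """
    wealth = [1.0] * len(speeds)
    leader = max(range(len(speeds)), key=lambda i: speeds[i])
    for t in range(1, max_rounds + 1):
        gains = [w * 0.01 * s for w, s in zip(wealth, speeds)]
        pooled = interdependence * sum(gains)
        wealth = [w + (1 - interdependence) * g + pooled / len(speeds)
                  for w, g in zip(wealth, gains)]
        if wealth[leader] / sum(wealth) >= threshold:
            return t
    return None

# A 10x-faster agent among nine ordinary ones, with and without sharing.
print(rounds_to_takeover([10] + [1] * 9, interdependence=0.0))
print(rounds_to_takeover([10] + [1] * 9, interdependence=0.5))
```

Under these assumptions the lone fast agent crosses the 90% threshold in a few dozen rounds when nothing is shared, and never does when half of all gains are pooled, which is at least the flavor of the interdependence point in the post.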

  • Tim Tyler

    As I mentioned, one point of disanalogy between the farming/industrial developments and AI is that farming didn’t put any humans out of work, while the humans put out of work by industry had other places in the economy to go. With AI, most of the economy is effectively taken out of human hands, maybe leaving a few vacancies in the service industries.

    Another disanalogy between the farming/industrial developments and AI is that it is hard to keep farming and industrial developments secret – they are typically too easy to reverse engineer. Whereas with AI, if you keep the code on your server, it is extremely difficult for anyone to reverse engineer it. It can even be deployed fairly securely in robots – if tamper-proof hardware is employed.

    Both of these differences suggest that AI may be more effective at creating inequalities than either farming or industry was.

    However, ultimately, whether groups of humans benefit differentially from AI or not probably makes little odds.

    The bigger picture is that it represents the blossoming of the new replicators into physical minds and bodies – so there is a whole new population of non-human entities to consider, with computers for minds and databases for genomes.

  • Chip

    Surely what I am about to write is obvious, and probably old. During World War II, when physicists began to realize the destructive potential of nuclear weapons, Albert Einstein was chosen by his peers to approach President Roosevelt. Einstein was perhaps not the best informed of the group, but he was the best known, and was thought to be able to get Roosevelt’s ear, as he did. In response, Roosevelt was able to convene all the greatest Western minds in physics, mathematics, and engineering to work together for a rapid solution to the problem. Clearly, the importance of the development of recursively self-improving super-human intelligence has got to be, almost by definition, greater than that of all other current problems, since it is the one project that would allow for the speedy solution of all other problems. Is there no famous person or persons in the field, able to organize his peers, and with access to the government, such that an effort similar to the Manhattan Project could be accomplished? The AI Institute has one research fellow, and is looking for one more. They have a couple of fund-raisers, but most of the world is unaware of AI altogether. This won’t get it done in a reasonable time-frame. Your competitors may well be backed by their governments.

    While the eventual use of the Manhattan Project’s discoveries is about as far from Friendly AI as imaginable, the power of super-human recursive AI is such that no matter by whom or where it is developed it will become the eminent domain of a government, much like the most powerful Cray computers. You might as well have their money and all the manpower right from the start, and the ability to influence its proper use.

    Can/will this be done?