An inside view forecast is generated by focusing on the case at hand, by considering the plan and the obstacles to its completion, by constructing scenarios of future progress, … The outside view … focuses on the statistics of a class of cases chosen to be similar in relevant respects to the present one. [Kahneman and Lovallo ’93]
Most everything written about a possible future singularity takes an inside view, imagining details of how it might happen. Yet people are seriously biased toward inside views, forgetting how quickly errors accumulate when reasoning about details. So how far can we get with an outside view of the next singularity?
Taking a long historical view, we see steady total growth rates punctuated by rare transitions when new faster growth modes appeared with little warning. We know of perhaps four such "singularities": animal brains (~600MYA), humans (~2MYA), farming (~10KYA), and industry (~0.2KYA). The statistics of previous transitions suggest we are perhaps overdue for another one, and would be substantially overdue in a century. The next transition would change the growth rate rather than capabilities directly, would take a few years at most, and the new doubling time would be a week to a month.
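To make that extrapolation concrete, here is a minimal sketch in Python. The doubling times for the hunting, farming, and industry modes are assumed ballpark figures, and the 60x to 250x speedup range is likewise an assumption about how much faster the next mode might grow:

```python
# Rough outside-view extrapolation of the next growth mode's doubling time.
# All doubling times are assumed, order-of-magnitude figures, not exact data.

DAYS_PER_YEAR = 365.25

doubling_times_years = {
    "hunting": 224_000,   # assumed ~224K-year economic doubling time
    "farming": 909,       # assumed ~900-year doubling time
    "industry": 6.3,      # assumed ~6-year doubling time for the current mode
}

# Speedup factor at each past transition: how much shorter doublings became.
times = list(doubling_times_years.values())
speedups = [earlier / later for earlier, later in zip(times, times[1:])]
print("speedup factors at past transitions:", [round(s) for s in speedups])

# If the next transition speeds growth by a similar factor (assume 60x-250x),
# the projected new doubling time follows directly:
current_days = doubling_times_years["industry"] * DAYS_PER_YEAR
for factor in (60, 250):
    print(f"{factor}x speedup -> doubling time ~{current_days / factor:.0f} days")
# Roughly 38 days and 9 days: on the order of a week to a month.
```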
Many are worried that such a transition could give extra advantages to some over others. For example, some worry that just one of our mind children, an AI in some basement, might within the space of a few weeks suddenly grow so powerful that it could take over the world. Inequality this huge would make it very important to make sure the first such creature is "friendly."
Yesterday I said yes, advantages do accrue to early adopters of new growth modes, but these gains seem to have gotten smaller with each new singularity. Why might this be? I see three plausible contributions:
1. The number of generations per growth doubling time has decreased, leading to less inequality per doubling time. So if the duration of the first-mover advantage, before others find similar innovations, is some fixed fraction of a doubling time, that duration now contains fewer generations (a rough numerical sketch follows this list).
2. When lineages cannot share information, the main way the future can reflect a new insight is for insight-holders to displace others. As we get better at sharing info in other ways, the first insight-holders displace others less.
3. Independent competitors can more easily displace one another than interdependent ones. For example, since the unit of the industrial revolution seems to have been Western Europe, Britain, which started it, did not gain much relative to the rest of Western Europe, though Western Europe gained more substantially relative to outsiders. So as the world becomes interdependent on larger scales, smaller groups find it harder to displace others.
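To put rough numbers on the first contribution, here is a minimal sketch under assumed generation lengths and doubling times (the same ballpark figures as above, purely illustrative):

```python
# Generations per growth doubling time, under assumed illustrative numbers.

eras = {
    # era: (economic doubling time in years, rough generation length in years)
    "farming":  (900, 30),
    "industry": (6, 30),   # assumed fitted value, as in the earlier sketch
}

# Assume the first-mover window, before others find similar innovations,
# lasts a fixed quarter of a doubling time (purely illustrative).
FIRST_MOVER_FRACTION = 0.25

for era, (doubling_years, generation_years) in eras.items():
    per_doubling = doubling_years / generation_years
    in_window = FIRST_MOVER_FRACTION * per_doubling
    print(f"{era}: {per_doubling:.1f} generations per doubling, "
          f"~{in_window:.1f} in the first-mover window")
# farming: ~30 generations per doubling; industry: a small fraction of one,
# so a fixed-fraction first-mover window now spans far fewer generations.
```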
The first contribution is sensitive to changes in generation times, but the other two come from relatively robust trends. An outside view thus suggests only a moderate amount of inequality in the next singularity – nothing like a basement AI taking over the world.
Excess inside viewing usually continues even after folks are warned that outside viewing works better; after all, inside viewing better shows off inside knowledge and abilities. People usually justify this via reasons why the current case is exceptional. (Remember how all the old rules didn’t apply to the new dotcom economy?) So expect to hear excuses for why the next singularity is also an exception where outside-view estimates are misleading. Let’s keep an open mind, but a wary open mind.