Dreams of Autarky

Selections from my 1999 essay "Dreams of Autarky":

[Here is] an important common bias on "our" side, i.e., among those who expect specific very large changes. … Futurists tend to expect an unrealistic degree of autarky, or independence, within future technological and social systems.  The cells in our bodies are largely-autonomous devices and manufacturing plants, producing most of what they need internally. … Small tribes themselves were quite autonomous. … Most people are not very aware of, and so have not fully come to terms with, their new inter-dependence.  For example, people are surprisingly willing to restrict trade between nations, not realizing how much their wealth depends on such trade. … Futurists commonly neglect this interdependence … they picture their future political and economic unit to be the largely self-sufficient small tribe of our evolutionary heritage.  … [Here are] some examples. …

[Many] imagine space economies almost entirely self-sufficient in mass and energy. … It would be easier to create self-sufficient colonies under the sea, or in Antarctica, yet there seems to be little prospect of or interest in doing so anytime soon. …

Eric Drexler … imagines manufacturing plants that are far more independent than in our familiar economy. … To achieve this we need not just … control of matter at the atomic level, but also the complete automation of the manufacturing process, all embodied in a single device… complete with quality control, waste management, and error recovery.  This requires "artificial intelligence" far more advanced than we presently possess. …

Knowledge is [now] embodied in human-created software and hardware, and in human workers trained for specific tasks. … It has usually been cheaper to leave the CPU and communication intensive tasks to machines, and leave the tasks requiring general knowledge to people.  Turing-test artificial intelligence instead imagines a future with many large human-created software modules … far more independent, i.e., less dependent on context, than existing human-created software. …

[Today] innovations and advances in each part of the world depend on advances made in all other parts of the world. … Visions of a local singularity, in contrast, imagine that sudden technological advances in one small group essentially allow that group to suddenly grow big enough to take over everything. … The key common assumption is that of a very powerful but autonomous area of technology.  Overall progress in that area must depend only on advances in that area, advances that a small group of researchers can continue to produce at will. And great progress in this area alone must be sufficient to let a small group essentially take over the world. …

[Crypto credential] dreams imagine that many of our relationships will be exclusively digital, and that we can keep these relations independent by separating our identity into relationship-specific identities. … It is hard to imagine potential employers not asking to know more about you, however. … Any small information leak can be enough to allow others to connect your different identities. …

[Consider also] complaints about the great specialization in modern academic and intellectual life.  People complain that ordinary folks should know more science, so they can judge simple science arguments for themselves. … Many want policy debates to focus on intrinsic merits, rather than on appeals to authority.  Many people wish students would study a wider range of subjects, and so be better able to see the big picture.  And they wish researchers weren’t so penalized for working between disciplines, or for failing to cite every last paper someone might think is related somehow.

It seems to me plausible to attribute all of these dreams of autarky to people not yet coming fully to terms with our newly heightened interdependence. … We picture our ideal political unit and future home to be the largely self-sufficient small tribe of our evolutionary heritage. … I suspect that future software, manufacturing plants, and colonies will typically be much more dependent on everyone else than dreams of autonomy imagine. Yes, small isolated entities are getting more capable, but so are small non-isolated entities, and the latter remain far more capable than the former. The riches that come from a worldwide division of labor have rightly seduced us away from many of our dreams of autarky. We may fantasize about dropping out of the rat race and living a life of ease on some tropical island. But very few of us ever do.

So academic specialists may dominate intellectual progress, and world culture may continue to overwhelm local variations. Private law and crypto-credentials may remain as marginalized as utopian communities have always been. Manufacturing plants may slowly get more efficient, precise, and automated without a sudden genie nanotech revolution. Nearby space may stay un-colonized until we can cheaply send lots of mass up there, while distant stars may remain uncolonized for a long long time. And software may slowly get smarter, and be collectively much smarter than people long before anyone bothers to make a single module that can pass a Turing test.

The relevance to my discussion with Eliezer should be obvious.  My next post will speak more directly.

  • Carl Shulman

    “The key common assumption is that of a very powerful but autonomous area of technology. Overall progress in that area must depend only on advances in this area, advances that a small group of researchers can continue to produce at will. And great progress in this area alone must be sufficient to let a small group essentially take over the world. …”
    A Manhattan Project backed by Japan or the U.S. or China is a ‘small group?’ What if an improvement in em efficiency gives it a supermajority of the world’s top-notch researchers until that advance is duplicated?

  • billswift

    Slightly off topic, but I just reviewed a book for Amazon that makes the point of the interconnections, and argues that it greatly reduces the risk of war.

    Producing Security: Multinational Corporations, Globalization, and the Changing Calculus of Conflict (Princeton Studies in International History and Politics) by Stephen G Brooks

    My review
The main thesis of the book is that since almost anything manufactured today that is even moderately complicated has its manufacture integrated across multiple locations around the world, one of the main causes of aggressive war, the seizure of valuable property, is averted. It would seem that Iraq’s invasion of Kuwait for oil is a counter-argument, but natural resources, even oil, are becoming less and less valuable relative to other things. Anyone, like Iraq, that would consider any natural resources particularly valuable would be too weak to actually get away with the aggression. The book is rather dry and academic in tone, but thoroughly argued. Highly recommended for anyone interested in military or international trade issues.

  • Bill, yes, good point.

    Carl, the Manhattan Project was probably the largest isolated research project in history, relative to world product at the time. And even it was a pretty small fraction.

  • James Andrix

    This isn’t directly related to this post, but I’d like to throw a widely different perspective on the fire. There’s a whole other subculture that has (I think) good arguments that the future will be very different than anything we consider here.

  • Carl Shulman

    It was a small part of total GDP, but a large portion of the world’s best scientists in relevant domains.

  • We generally specialize when it comes to bugs in computer programs – rather than monitoring their behavior and fixing them ourselves, we inform the central development authority for that program of the problem, and rely on them to fix it everywhere.

    The benefit from automation depends on the amount of human labor already in the process, a la the bee-sting principle of poverty. Automating one operation while many others are still human-controlled is a marginal improvement, because you can’t run at full speed or fire your human resources department until you’ve gotten rid of all the humans.
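    The marginal-improvement point can be sketched in Amdahl's-law style (a toy model, not from the original comment: it assumes a process of equal-cost human-controlled steps, and treats an automated step as effectively free):

    ```python
    # Toy model of the "bee-sting" point above: automating one of many
    # equal-cost human-controlled steps yields only a marginal speedup,
    # because the remaining human steps still bound overall throughput.

    def speedup_from_automating(total_steps: int, automated_steps: int) -> float:
        """Overall speedup when `automated_steps` of `total_steps`
        equal-cost steps are made effectively free."""
        remaining = total_steps - automated_steps
        if remaining == 0:
            raise ValueError("fully automated: unbounded in this toy model")
        return total_steps / remaining

    print(speedup_from_automating(10, 1))  # 1 of 10 steps: only ~1.11x
    print(speedup_from_automating(10, 9))  # 9 of 10 steps: 10x
    ```

    The numbers illustrate the asymmetry: the big gains arrive only near full automation, which is why partial automation leaves the human-resources overhead largely intact.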

    The incentive for automation depends on the number of operations being performed. If you’re doing something a trillion times over, it has to be automatic. We pay whatever energy cost is required to make transistor operations on chips fully reliable, because it would be impossible to have a chip if each transistor required human monitoring. DNA sequencing is increasingly automated as we try to do more and more of it.

    With nanotechnology it is more possible to automate because you are designing all the machine elements of the system on a finer grain, closer to the level of physical law where interactions are perfectly regular; and more importantly, closing the system: no humans wandering around on your manufacturing floor.

    And the incentive to automate is tremendous because of the gigantic number of operations you want to perform, and the higher levels of organization you want to build on top – it is akin to the incentive to automate the internal workings of a computer chip.

    Now with all that said, I find it extremely plausible that, as with DNA sequencing, we will only see an increasing degree of automation over time, rather than a sudden fully automated system appearing ab initio. The operators will be there, but they’ll handle larger and larger systems, and finally, in at least some cases, they’ll disappear. Not assembly line workers, sysadmins. Bugs will continue to be found but their handling will be centralized and one-off rather than local and continuous. The system will behave more like the inside of a computer chip than the inside of a factory.

    – such would be my guess, not to materialize instantly but as a trend over time.

  • a person

    This line was particularly illuminating:

    “Yes, small isolated entities are getting more capable, but so are small non-isolated entities, and the latter remain far more capable than the former.”

  • Eliezer, yes the degree of automation will probably increase incrementally. As I explore somewhat here, there is also the related issue of the degree of local production, vs. importing inputs made elsewhere. A high degree of automation need not induce a high degree of local production. Perhaps each different group specializes in automating certain aspects of production, and they coordinate by sending physical inputs to each other.

  • Tim Tyler

    Whether the sting of poverty principle applies depends on whether what is being automated lies in parallel with some human operation on the critical path – and there will be plenty of cases where that’s not true.

  • Robin, numerous informational tasks can be performed far more quickly by special-purpose hardware, arguably analogous to more efficient special-purpose molecular manufacturers. The cost of shipping information is incredibly cheap. Yet the typical computer contains a CPU and a GPU and does not farm out hard computational tasks to distant specialized processors. Even when we do farm out some tasks, mostly for reason of centralizing information rather than computational difficulty, the tasks are still sent to large systems of conventional CPUs. Even supercomputers are mostly made of conventional CPUs.

    This proves nothing, of course; but it is worth observing of the computational economy, in case you have some point that differentiates it from the nanotech economy. Are you sure you’re not being prejudiced by the sheer traditionalness of moving physical inputs around through specialized processors?

  • Tim Tyler

    Yes, small isolated entities are getting more capable, but so are small non-isolated entities, and the latter remain far more capable than the former.

    I mentioned this recently on the thread two down from this one, but just in case it isn’t sinking in, the main issue in this area is not with isolated entities – rather it is with folks like Google – who take from the rest of the world, but don’t contribute everything they build back again – and so develop their own self-improving ecosystem that those outside the company have no access to. The only cost they pay involves not gaining in the short term by monetising their private tech (by sharing it) – and that cost can be swallowed gradually, a drop at a time.

  • Eliezer, both computing and manufacturing are old enough now to be “traditional”; I expect each mode of operation is reasonably well adapted to current circumstances. Yes future circumstances will change, but do we really know in which direction? Manufacturing systems may well also now ship material over distances “for reason of centralizing information”.
