Kaczynski’s Collapse Theory

Many people argue that we should beware of foreigners, and people from other ethnicities. Beware of visiting them, trading with them, talking to them, or allowing them to move here. The fact that so many people are willing to argue for such conclusions is some evidence in favor of them. But the fact that the arguments offered are so diverse, and so often contradict one another, takes away somewhat from the strength of this evidence. This pattern looks like people tend to have a preconceived conclusion for which they opportunistically embrace any random arguments they can find.

Similarly, many argue that we should be wary of future competition, especially if that might lead to concentrations of power. I recently posted on my undergrad law & econ students’ largely incoherent fears of one group taking over the entire solar system, and on how Frederick Engels expressed related fears back in 1844. And I’ve argued on this blog with my ex-co-blogger regarding his concerns that if future AI results from competing teams, one team might explode to suddenly take over the world. In this post I’ll describe Ted “Unabomber” Kaczynski’s rather different theory on why we should fear competition leading to concentration, from his recent book Anti Tech Revolution.

Kaczynski claims that the Fermi paradox, i.e., the fact that the universe looks dead everywhere, is explained by the fact that technological civilizations very reliably destroy themselves. When this destruction happens naturally, it is so thorough that no humans could survive. Which is why his huge priority is to find a way to collapse civilization sooner, so that at least some humans survive. Even a huge nuclear war is preferable, as at least some people survive that.

Why must everything collapse? Because, he says, natural-selection-like competition only works when competing entities have scales of transport and talk that are much less than the scale of the entire system within which they compete. That is, things can work fine when bacteria that each move and talk across only meters compete across an entire planet. The failure of one bacterium doesn’t then threaten the planet. But when competing systems become complex and coupled on global scales, then there are always only a few such systems that matter, and breakdowns often have global scopes.
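
To make the locality intuition concrete, here is a minimal toy cascade (my illustration, not a model from the book; the population size, coupling range, and spread probability are all arbitrary assumptions). A single seeded failure among locally coupled agents burns out near its origin, while the same failure among globally coupled agents can take down nearly the whole system.

```python
import random

def cascade_size(n_agents=1000, coupling_range=2, p_spread=0.5, seed=0):
    """Fraction of agents that fail after one seeded failure.

    Each failed agent can knock out any agent within `coupling_range`
    positions of it (a stand-in for its scale of transport and talk).
    """
    rng = random.Random(seed)
    failed = {0}                 # agent 0 fails first
    frontier = [0]
    while frontier:
        agent = frontier.pop()
        lo = max(0, agent - coupling_range)
        hi = min(n_agents - 1, agent + coupling_range)
        for other in range(lo, hi + 1):
            if other not in failed and rng.random() < p_spread:
                failed.add(other)
                frontier.append(other)
    return len(failed) / n_agents

# Local coupling: the failure typically burns out after a handful of agents.
print(cascade_size(coupling_range=2))
# Global coupling: one failure can plausibly reach nearly everyone.
print(cascade_size(coupling_range=1000))
```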

Kaczynski dismisses the possibility that world-spanning competitors might anticipate the possibility of large correlated disasters, and work to reduce their frequency and mitigate their harms. He says that competitors can’t afford to pay any cost to prepare for infrequent problems, as such costs hurt them in the short run. This seems crazy to me, as most of the large competing systems we know of do in fact pay a lot to prepare for rare disasters. Very few correlated disasters are big enough to threaten to completely destroy the whole world. The world has had global scale correlation for centuries, with the world economy growing enormously over that time. And yet we’ve never even seen a factor of two decline, while at least thirty factors of two would be required for a total collapse. And while it should be easy to test Kaczynski’s claim in small complex systems of competitors, I know of no supporting tests.
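
For scale, a quick back-of-the-envelope on those thirty factors of two (the ~$75 trillion world product figure is my own round-number assumption, not a number from the post):

```python
# Thirty successive halvings shrink the economy by a factor of 2^30 ~ 10^9.
world_product = 75e12                 # dollars/year; assumed round figure
residual = world_product / 2**30      # what remains after thirty halvings
print(f"{2**30:,}")                   # 1,073,741,824
print(f"${residual:,.0f} per year")   # ~$69,849: less than one worker's output today
```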

Yet all of the dozen reviews I read of Kaczynski’s book found his conclusion here to be obviously correct. Which seems to me evidence that a great many people find the worry about future competitors so compelling that they endorse most any vaguely plausible supporting argument. Which I see as weak evidence against that worry.

Yes of course correlated disasters are a concern, even when efforts are made to prepare against them. But it’s just not remotely obvious that competition makes them worse, or that all civilizations are reliably and completely destroyed by big disasters, so much so that we should prefer to start a big nuclear war now that destroys civilization but leaves a few people alive. Surely if we believed his theory a better solution would be to break the world into a dozen mostly isolated regions.

Kaczynski does deserve credit for avoiding common wishful thinking in some of his other discussion. For example, he says that we can’t much control the trajectory of history, both because it is very hard to coordinate on the largest scales, and because it is hard to estimate the long-term consequences of many choices. He sees how hard it is for social movements to actually achieve anything substantial. He notes that futurists who expect to achieve immortality and then live for a thousand years too easily presume that a fast-changing competitive world will still have need for them. And while I didn’t see him actually say it, I expect he’s the sort of person who’d make the reasonable argument that individual humans are just happier in a more forager-like world.

Kaczynski isn’t stupid, and he’s more clear-headed than most futurists I read. Too bad his low mood inclines him so strongly to embrace a poorly-argued inevitable-collapse story.

Some book quotes on his key claim:

In any environment that is sufficiently rich, self-propagating systems will arise, and natural selection will lead to the evolution of self-propagating systems having increasingly complex, subtle, and sophisticated means of surviving and propagating themselves. … In the short term, natural selection favors self-propagating systems that pursue their own short-term advantage with little or no regard for long-term consequences. …

Self-propagating subsystems of a given supersystem tend to become dependent on the supersystem and on the specific conditions that prevail within the supersystem. … In the event of the destruction of the supersystem or of any drastic acceleration of changes in the conditions prevailing within the supersystem, the subsystems can neither survive nor propagate themselves. … But as long as the supersystem exists and remains more or less stable, natural selection … disfavors those subsystems that “waste” some of their resources in preparing themselves to survive the eventual destabilization of the supersystem. … Natural selection tends to produce some self-propagating human groups that operate over regions approaching the maximum size allowed by the available means of transportation and communication. … [Today,] natural selection tends to create a world in which power is mostly concentrated in the possession of a relatively small number of global self-propagating systems. … If small-scale self-prop systems organize themselves into a coalition having worldwide influence, then the coalition will itself be a global self-prop system. … Intuition tells us that desperate competition among the global self-prop systems will tear the world-system apart. …

Earth’s self-prop systems will have become dependent for their survival on the fact that conditions have remained within these limits. Large-scale self-prop human groups, as well as any purely machine-based self-prop systems, will be dependent also on conditions of more recent origin relating to the way the world-system is organized; for example, conditions relating to economic relationships. The rapidity with which these conditions change must remain within certain limits, else the self-prop systems will not survive. … If conditions ever vary wildly enough outside the limits, then, with near certainty, all of the world’s more complex self-prop systems will die without progeny. … With several self-prop systems of global reach, armed with the colossal might of modern technology and competing for immediate power while exercising no restraint from concern for long-term consequences, it is extremely difficult to imagine that conditions on this planet will not be pushed far outside all earlier limits and batted around so erratically that for any of the Earth’s more complex self-prop systems, including complex biological organisms, the chances of survival will approach zero. …

There is another way of seeing that this situation will lead to radical disruption of the world-system. Students of industrial accidents know that a system is most likely to suffer a catastrophic breakdown when (i) the system is highly complex (meaning that small disruptions can produce unpredictable consequences), and (ii) tightly coupled (meaning that a breakdown in one part of the system spreads quickly to other parts). The world-system has been highly complex for a long time. What is new is that the world-system is now tightly coupled. This is a result of the availability of rapid, worldwide transportation and communication, which makes it possible for a breakdown in any one part of the world-system to spread to all other parts. As technology progresses and globalization grows more pervasive, the world-system becomes ever more complex and more tightly coupled, so that a catastrophic breakdown has to be expected sooner or later. …

There is nothing implausible about the foregoing explanation of the Fermi Paradox if there is a process common to all technologically advanced civilizations that consistently leads them to self-destruction. Here we’ve been arguing that there is such a process.

  • Kaczynski is right that these are difficulties. As Prof. Hanson observes, though, those difficulties are not determinative. Unless war becomes even more unprofitable, it seems only a global singleton ‘who gets to reproduce’ order can *completely* clamp down on the ‘rabbit’ strategy. But perhaps in the current style a free floating minority of elites can coordinate to protect their share of global power, surfing a sea of rabbits.

  • Psmith

    If you believe that “individual humans are just happier in a more forager-like world”, predicting collapse is plausibly optimistic, not “low mood.”

  • Anders Sandberg

    The collapse theory seems too weak to explain the Fermi paradox. If one gets a final disaster due to something like a paperclip-maximizer AI, there is now a propagating system that does not have internal competing dynamics. There is a decently big set of such possible ends that produce very visible expanders.

    The fundamental problem here is the same as with other civilizational attractor explanations for the paradox (like addictive tech or an introdus into a hidden solid state civilisation): they are the strongest sociological claims ever, since they claim a particular dynamics holds not just for every society, but for every species (no matter how weird), and every subgroup inside the species (no matter how weird, prepared, or lucky). It is enough that some subgroups survive for the explanation to break.

    • Robin Hanson


      • Anders Sandberg

        I wonder if this is also an argument against Kaczynski’s theory applied to terrestrial societies. If there is a strong tendency for societies to become increasingly complex and tightly coupled it does not mean all parts of the society will be like that. So when disaster strikes those parts would survive. I would assume his rejoinder is that if the process becomes global and strong enough the fraction of such uncoupled parts becomes so small that the implosion gets them too. In space natural separation prevents too extreme dependency, but this might not be true on a planetary scale.

        I think the really interesting question is rather what the distribution of dependency is in an economy where there is competition but also preemptive planning. It is not obvious that it is always smart and efficient to depend on others (e.g. in coding you can try to *never* reinvent the wheel, but sane programmers balance rolling their own code against finding and using a library). I assume economics has something obvious and elementary to say about this?

      • Robin Hanson

        I agree that a fear of being caught up in a collapse limits how dependent any one part allows itself to become on the rest, and this limits how far any one collapse can go. And yes, eventually being spread far across space will ensure a reduced dependence.

    • Blissex

      «It is enough that some subgroups survive for the explanation to break.»

      BTW some scifi/futurologist people of some repute have created an interesting future space civilization overview, “Orion arm”, where the Fermi paradox is central and unexplained; e.g. there are billion-year-old scattered artifacts that are obviously the product of very advanced science, but the “Orion arm” is pretty much otherwise empty.

      In this future scenario there are various levels of “hider” communities (1, 2) which try to disconnect from wider society, usually for “prepper” reasons.

  • Joseph Hertzlinger

    The parts of the system on different planets will be loosely coupled. So… this analysis would not apply to a society that has settled more than one planet.

    We just have to survive that long.

  • Robert Koslover

    Heh. “Kaczynski…says… This seems crazy to me.” Well, you know, that’s probably a justifiable conclusion, considering who you are talking about.

  • Blissex

    I tend to agree with the many that agree with Kaczynski, in that he makes several very good points.
    In particular his point that competition becomes very short-term at high degrees of complexity. And I think that this claim is wildly, delusionally optimistic:

    «most of the large competing systems we know of do in fact pay a lot to prepare for rare disasters»

    Perhaps it’s because I have both an engineering and accounting mindset, but my impression (and I am not alone) is that what I call “under-depreciation of tail risk” (or “asset stripping”) is both pervasive and extremely profitable in the short term, to the point that power structures in complex societies utterly depend on it. There are various prey-predator models that illustrate the point.
    Put another way, actually existing societies and organizations tend to get stuck in local maxima of optimization landscapes because their internal power structures adapt to and depend on staying in those local maxima, and exploring different regions of the optimization landscape gets “discouraged”.

    What I reckon is that only *religion* can avoid under-depreciation of tail risk and getting stuck in local maxima like that, because religion motivates people to do irrational things like exploring the optimization landscape outside their current local maximum, such as actually provisioning for rare great catastrophes. Religious people will walk out of their local maximum “because God wills it”, and many will walk into worse parts of the optimization landscape, but some will walk into better ones (e.g. Arabs after Mohammed, English non-conformists sailing to America).

    BTW Kaczynski’s argument seems to me to strongly echo the argument by D Landes about progress in Europe, that it depended on there being several distinct political systems, so that there was always *some* part of Europe that was interested in progress, and he gives these examples:

    * When the Portuguese elites decided that religious repression was more important than progress and knowledge, the Portuguese men of science could emigrate to other European countries that welcomed them.
    * When the emperor of China decided that foreign commerce was destabilizing, the decree was executed across the whole of China, because his authority was universal in that region.

    • Robin Hanson

      All evolving systems get stuck in local maxima. That doesn’t at all suggest that they reliably collapse due to big disasters.

      • Blissex

        But all local maxima eventually disappear, and that’s the inevitable “big disaster”. When the local maximum vanishes, a system that is exquisitely tuned to it will vanish too; a system that isn’t will, generally speaking, be more adaptable.
        The problem is that it is usually in the interests of the top layers of the social hierarchy to choose the “exquisitely tuned” option, because it maximizes their power.

        This happens within businesses too: when the survival of the business requires changing the business model, the existing top layers of the business will resist any change in business model until it is too late, because they know very well that any change in business model undermines their position as the top layer.

        The above is more or less the narrative that J Diamond gives in “Collapse” for the deforestation of Easter Island.

        Let’s anthropomorphize “species”: a species has two strategies: maintain a wide range of diversity among members, which maximizes the ability to survive changes in the environment, or have all members carefully optimized for the current environment. A species that chooses the second strategy will outcompete any species that chooses the first.

        Eventually the discriminating factor is speed of change: if it is high/“higher” then “diversity” usually wins, if it is low/“lower” then “optimized” usually wins. The dramatic situation is when the speed of change itself changes, that is, it is low for a long time and then surges for a while. Then during the low-change periods an “optimized” species will wipe out a “diversity” species, and when high-change periods happen it will be wiped out itself.
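
        A toy illustration of this trade-off (my own toy numbers, nothing from Kaczynski; survival is simply being within a fixed tolerance of the environment’s optimum):

        ```python
        def survivors(traits, env, tolerance=0.5):
            """Members survive iff their trait is within `tolerance` of the optimum."""
            return [t for t in traits if abs(t - env) <= tolerance]

        optimized = [0.0] * 100                         # all members tuned to env = 0
        diverse = [i / 25.0 - 2.0 for i in range(100)]  # spread evenly over [-2, 2)

        # Stable environment (optimum at 0): "optimized" keeps all 100 members,
        # "diverse" keeps only the 25 near the optimum, so it gets outcompeted.
        print(len(survivors(optimized, env=0.0)), len(survivors(diverse, env=0.0)))

        # The environment jumps to 1.5: "optimized" is wiped out entirely,
        # while a remnant of "diverse" carries on.
        print(len(survivors(optimized, env=1.5)), len(survivors(diverse, env=1.5)))
        ```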

      • Blissex

        «Let’s anthropomorphize “species”: a species has two strategies:»

        Let’s consider an extreme scenario: humans discover X (could be oil, could be genetic engineering, …) and X represents a really nice local maximum, and every human society, including those in remote Andean villages or Indonesian islands, becomes dependent on X or exposed to X because it is so amazingly convenient and pervasive. Then if X vanishes or backfires, *everybody* is doomed.

        To some extent Kaczynski’s argument is that progress has a tendency to create and diffuse “technologies” that in the short term are awesomely convenient, so they get widely adopted, but may have long-term flaws that become common modes of failure.

      • Robin Hanson

        This argument applies equally to all systems that have ever existed. But since those systems have continued to exist over a long time, Kaczynski knew he needed a new argument, one that only applied to new systems that hadn’t existed before. It is his new argument about new systems that I’m criticizing.

      • Blissex

        «This argument applies equally to all systems that have ever existed. [ … ] It is his new argument about new systems that I’m criticizing.»

        But that’s exactly the point I am trying to address. Your summary of Kaczynski’s position includes:

        That is, things can work fine when bacteria that each move and talk across only meters compete across an entire planet. The failure of one bacterium doesn’t then threaten the planet. But when competing systems become complex and coupled on global scales, then there are always only a few such systems that matter, and breakdowns often have global scopes.

        So my understanding is that we are all agreed that “systems” adapt to local maxima.
        Kaczynski’s point is about a local (in the optimization sense) maximum that is global (in the geographic sense).
        When the local maximum shifts, there can then be global failure.

        Your criticism seems to be that such global failure is going to be planned for by intelligent system designers:

        large competing systems we know of do in fact pay a lot to prepare for rare disasters. Very few correlated disasters are big enough to threaten to completely destroy the whole world.

        which is demonstrated by “The world has had global scale correlation for centuries, with the world economy growing enormously over that time.”

        My impression is that you are arguing that even if a system is global and is tuned to a local maximum, its leaders will increase system costs substantially to prepare for possible catastrophic shifts in that local maximum, and that this in fact has happened for centuries.

        Now I’ll make this imaginary example: imagine a planet where temperatures everywhere have been a constant 21C for 10,000 years, and the world economy has been tuned wonderfully for that environment. Should we believe that large investments would be made for the possibility of temperatures starting to oscillate between 0C and 40C? I simply think nobody in power would decide that.

        Also I think that *so far* the world has not really been global in scale; economies and ecosystems have been largely uncoupled, if only because of the independence of political systems. Global government, coupling on the scale of Imperial China, is simply not yet there.

        Kaczynski is worrying that soon there will be really *global* technological or political systems that will involve a global spreading of common modes of failure, motivated by short-term advantage, and that nobody will want to spend the money to diversify away those common modes of failure.

        That is, there will be a single large no-longer-competing system that will have evolved under competitive pressure, so it will have some global advantage coupled with a global mode of failure, and even after it has defeated all competing systems it will keep the common mode of failure.

        What kind? Well, for example some kind of cumulative or long-term poison. Another imaginary example: somebody discovers a new, very cheap and effective fuel that has the unknown side effect that after 6 generations users become sterile. A system in which everybody uses it evolves because its users outcompete everybody else; after 5 generations there is nobody who does not use it, and in another generation all humans are sterile.

        The better argument against Kaczynski to me is not that somebody will ensure that any common mode of failure is eliminated, or that a global system has indeed worked well for centuries, but that it is extremely unlikely that a single global system will arise: there will always be human communities isolated enough to be outside any otherwise global system.

        Except for “poisons” that have a local origin but a global effect, like an unstoppable plague to which nobody has natural immunity, or self-replicating killer robots or nanotech goo.

        What I think Kaczynski worries about is indeed the release of some kind of catastrophic replicator, or long term widely used cumulative poison.

        «correlated disasters are a concern, even when efforts are made to prepare against them. But it’s just not remotely obvious that competition makes them worse»

        Well, the argument is that efforts are not made to prepare against them because of competitive pressure: the picking-up-pennies-in-front-of-steamrollers issue. And that competitive pressure does not make them worse, but more likely: because competitive pressure tends to drive closer tuning to current local (in an optimization sense) maxima, making systems more vulnerable to shifts in those local maxima.

      • brianholtz

        Jared-Diamond-style collapses are irrelevant on the geological timescale of the Fermi Paradox. All you need is a breeding population of 10K humans and a copy of Wikipedia, and the collapse is just a hiccup. Fermi-relevant extinction requires a runaway non-intelligent replicator or a global intelligence-sterilizing environmental catastrophe, and we already have a track record of how rare those things are. Kaczynski’s invocation of the Fermi Paradox seems like hand-waving, in the absence of specific analysis of how technological progress will sterilize the Earth of all intelligent replicators. There are such scenarios to consider, but they don’t support the freight of Kaczynski’s extreme anti-technology case. And as Sandberg points out here, it’s untenable to claim that technological progress always and everywhere leads to self-sterilization of every single intelligence-producing ecosystem.

  • Riothamus

    It doesn’t seem like there is any inherent relationship between how much it costs to prepare for an infrequent problem and the severity of the problem.

    I think “competing systems spend money to prepare for disasters” understates the case: entire types of competing subsystems are entirely dedicated to that function, such as insurance and the military.

  • haig

    I agree with the “low mood” assessment; he correctly articulates the limits of fragile systems, but he seems to have already made up his mind about the conclusion (collapse) without considering other (i.e. anti-fragile) possibilities, which I guess would require a more optimistic and less misanthropic disposition.
