
tl;dr: if you're discussing human extinction, TK's written opinions aren't relevant. He's only concerned with civilization collapse for the purpose of preserving humanity.

---

I'm re-reading what you've written and I believe there may be a straw-man element to your argument.

I don't think TK has ever argued that humanity will face an extinction event - or, to be precise, he never claimed that we are capable of avoiding massive-scale events such as an unanticipated large object impact with the Earth. What TK discussed was the dangers inherent in civilization that could mold humanity into something we do not recognize as human. To that end, I would venture that his goal is a reduction of the population on the order of 4 to 6 orders of magnitude, which he seems to think would collapse civilization as we know it, while preserving humanity as we know it.

Without the context "TK wants to wreck civilization to save humanity" included in any interpretation of his writing, discussions can blindly lead to strange conclusions and extrapolations regarding his opinions. For instance, in quoting his contemplation of the Fermi Paradox, I don't believe TK concludes that all life in the galaxy has died due to a system-wide collapse we can yet contemplate, but due to some as-yet unforeseen consequence of the evolution of life in concert with the kinds of civilization systems he opposed.

His thesis is not that tech kills man, per se, but that tech makes man something different, which is then more fragile and subject to extinction - but in the definitional sense of humanity-as-we-know-it, man is long gone by the time the final extinction event occurs.


I am talking about human extinction. An economic collapse would be bad for those who suffered it, but humanity would continue and revive soon on a cosmological timescale.


"The world has had global scale correlation for centuries, with the world economy growing enormously over that time. And yet we’ve never even seen a factor of two decline, while at least thirty factors of two would be required for a total collapse."

Pardon my late comment.

It appears foolish to propose that a "30 factors of two" decline "would be required for a total collapse." I can only assume you mean a decline of population, in which case I think a reduction from the current population of 7.6 billion to a classical population of about 300 million would be sufficient to collapse most modern hierarchy and modern industrial production. That's about 4.66 factors of two, not 30. A thirty-factors-of-two reduction in population would leave about 7 or 8 human beings - practically speaking, one family.
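As a quick sanity check of the arithmetic, here is a throwaway Python snippet using the population figures above:

```python
import math

current = 7.6e9      # current world population (the figure used above)
classical = 300e6    # rough "classical" world population

# how many successive halvings ("factors of two") separate the two figures
print(f"{math.log2(current / classical):.2f} factors of two")   # ~4.66

# and what thirty halvings of the current population would actually leave
print(f"{current / 2**30:.1f} people left")                     # ~7.1
```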

Are you proposing a Biblical flood is required to eradicate civilization as we know it?


Jared-Diamond-style collapses are irrelevant on the geological timescale of the Fermi Paradox. All you need is a breeding population of 10K humans and a copy of Wikipedia, and the collapse is just a hiccup. Fermi-relevant extinction requires a runaway non-intelligent replicator or a global intelligence-sterilizing environmental catastrophe, and we already have a track record of how rare those things are. Kaczynski's invocation of the Fermi Paradox seems like hand-waving, in the absence of specific analysis of how technological progress will sterilize the Earth of all intelligent replicators. There are such scenarios to consider, but they don't support the freight of Kaczynski's extreme anti-technology case. And as Sandberg points out here, it's untenable to claim that technological progress always and everywhere leads to self-sterilization of every single intelligence-producing ecosystem.


I agree with the "low mood" assessment: he correctly articulates the limits of fragile systems, but he seems to have already made up his mind about the conclusion (collapse) without considering other (i.e. anti-fragile) possibilities, which I guess would require a more optimistic and less misanthropic disposition.


It doesn't seem like there is any inherent relationship between how much it costs to prepare for an infrequent problem and the severity of the problem.

I think saying that competing systems spend money to prepare for disasters understates the case - entire types of competing subsystems are entirely dedicated to that function, such as insurance and the military.


«This argument applies equally to all systems that have ever existed. [ ... ] It is his new argument about new systems that I'm criticizing.»

But that's exactly the point I am trying to address. Your summary of Kaczynski's position includes:

“That is, things can work fine when bacteria who each move and talk across only meters compete across an entire planet. The failure of one bacteria doesn’t then threaten the planet. But when competing systems become complex and coupled on global scales, then there are always only a few such systems that matter, and breakdowns often have global scopes.”

So my understanding is that we are all agreed that "systems" adapt to local maxima. Kaczynski's point is about a local (in the optimization sense) maximum that is global (in the geographic sense). When the local maximum shifts, there can then be global failure.

Your criticism seems to be that such global failure is going to be planned for by intelligent system designers:

“large competing systems we know of do in fact pay a lot to prepare for rare disasters. Very few correlated disasters are big enough to threaten to completely destroy the whole world.”

which is demonstrated by “The world has had global scale correlation for centuries, with the world economy growing enormously over that time.”

My impression is that you are arguing that even if a system is global and is tuned to a local maximum, its leaders will increase system costs substantially to prepare for possible catastrophic shifts in that local maximum, and that this has in fact happened for centuries.

Now I'll make this imaginary example: imagine a planet where temperatures everywhere have been a constant 21C for 10,000 years, and the world economy has been tuned wonderfully for that environment. Should we believe that large investments would be made against the possibility of temperatures starting to oscillate between 0C and 40C? I simply think nobody in power would decide to make them.

Also I think that *so far* the world has not really been coupled on a global scale; economies and ecosystems have been largely uncoupled, if only because of the independence of political systems. Global government, coupling on the scale of Imperial China, is simply not yet there.

Kaczynski is worrying that soon there will be really *global* technological or political systems that will involve a global spreading of common modes of failure, motivated by short-term advantage, and that nobody will want to spend the money to diversify the common modes of failure.

That is, there will be a single large, no-longer-competing system that has evolved under competitive pressure, so it will have some global advantage coupled with a global mode of failure, and even after it has defeated all competing systems it will keep that common mode of failure.

What kind? Well, for example some kind of cumulative or long-term poison. Another imaginary example: somebody discovers a new, very cheap and effective fuel, which has the unknown side effect that after 6 generations its users become sterile. A system in which everybody uses it evolves because its users outcompete everybody else; after 5 generations there is nobody left who does not use it, and one generation later all humans are sterile.
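Purely to make that dynamic concrete - a minimal toy simulation with invented numbers, nothing Kaczynski actually wrote - users of the hypothetical fuel out-grow and convert non-users each generation, and by the time the delayed flaw manifests there is no unexposed subpopulation left:

```python
# Toy model of the "convenient fuel with a delayed flaw" story above.
# All numbers are invented purely for illustration.

users, non_users = 0.01, 0.99   # initial shares of the population
growth_edge = 2.0               # users out-grow non-users by this factor per generation
adoption = 0.7                  # fraction of remaining non-users who adopt each generation

for gen in range(1, 6):
    switched = adoption * non_users
    users = users * growth_edge + switched
    non_users -= switched
    total = users + non_users
    users, non_users = users / total, non_users / total  # renormalize to shares
    print(f"generation {gen}: users {users:.1%}, non-users {non_users:.1%}")

# By generation 5 essentially nobody is left outside the system, so when the
# delayed side effect arrives a generation later it is a common mode of
# failure with no unexposed subpopulation to fall back on.
```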

The better argument against Kaczynski to me is not that somebody will ensure that any common mode of failure is eliminated, or that a global system has indeed worked well for centuries, but that it is extremely unlikely that a single global system will arise: there will always be human communities isolated enough to be outside any otherwise global system.

Except for "poisons" that have a local origin but a global effect, like an unstoppable plague to which nobody has natural immunity, or self-replicating killer robots or nanotech goo.

What I think Kaczynski worries about is indeed the release of some kind of catastrophic replicator, or long term widely used cumulative poison.

«correlated disasters are a concern, even when efforts are made to prepare against them. But its just not remotely obvious that competition makes them worse»

Well, the argument is that efforts are not made to prepare against them because of competitive pressure -- the picking-up-pennies-in-front-of-steamrollers issue. And that competitive pressure does not make them worse, but more likely: competitive pressure tends to drive closer tuning to current local (in an optimization sense) maxima, making systems more vulnerable to shifts in those local maxima.


This argument applies equally to all systems that have ever existed. But since those systems have continued to exist over a long time, Kaczynski knew he needed a new argument, one that only applied to new systems that hadn't existed before. It is his new argument about new systems that I'm criticizing.


«Let's anthropomorphize "species": a species has two strategies:»

Let's consider an extreme scenario: humans discover X (could be oil, could be genetic engineering, ...) and X represents a really nice local maximum, and every human society, including those in remote Andean villages or on Indonesian islands, becomes dependent on X or exposed to X because it is so amazingly convenient and pervasive. Then if X vanishes or backfires, *everybody* is doomed.

To some extent Kaczynski's argument is that progress has a tendency to create and diffuse "technologies" that in the short term are awesomely convenient, so they get widely adopted, but may have long term flaws that become common modes of failure.


But all local maxima eventually disappear, and that's the inevitable "big disaster". When the local maximum vanishes, a system that is exquisitely tuned to it will vanish too, while a system that isn't will be more adaptable, generally speaking. The problem is that it is usually in the interests of the top layers of the social hierarchy to choose the "exquisitely tuned" option, because it maximizes their power.

This happens within businesses too: when the survival of the business requires changing the business model, the existing top layers of the business will resist any change in business model until it is too late, because they know very well that any change in business model undermines their being the top layer.

The above is more or less the narrative that J Diamond gives in "Collapse" for the deforestation of Easter Island.

Let's anthropomorphize "species": a species has two strategies: a wide range of diversity among members, which maximizes the ability to survive changes in the environment, or all members carefully optimized for the current environment. A species that chooses the second strategy will outcompete any species that chooses the first.

Eventually the discriminating factor is the speed of change: if it is high/"higher" then "diversity" usually wins; if it is low/"lower" then "optimized" usually wins. The dramatic situation is when the speed of change itself changes, that is, it is low for a long time and then surges for a while. Then during the low-change periods an "optimized" species will wipe out a "diversity" species, and when the high-change periods happen it will be wiped out itself.
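To make that trade-off concrete, here is a minimal toy simulation of my own (not anything from the thread, and all parameters are invented): members of a species are scattered around last generation's optimum, the optimum moves by a fixed step each generation, and only members within some tolerance of the new optimum survive. The tightly "optimized" species does better when change is slow, the "diverse" one when change is fast:

```python
import random

def species_fitness(spread, env_step, generations=1000, members=200, tolerance=1.0):
    """Mean fraction of members surviving each generation: members are scattered
    (sd = spread) around last generation's optimum, the optimum moves by env_step,
    and only members within `tolerance` of the new optimum survive."""
    survival = 0.0
    for _ in range(generations):
        survivors = sum(abs(random.gauss(0.0, spread) - env_step) <= tolerance
                        for _ in range(members))
        survival += survivors / members
    return survival / generations

for env_step, label in [(0.1, "slow change"), (3.0, "fast change")]:
    optimized = species_fitness(spread=0.2, env_step=env_step)   # low trait variance
    diverse = species_fitness(spread=3.0, env_step=env_step)     # high trait variance
    print(f"{label}: optimized {optimized:.2f}, diverse {diverse:.2f}")
```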


All evolving systems get stuck in local maxima. That doesn't at all suggest that they reliably collapse due to big disasters.


«It is enough that some subgroups survive for the explanation to break.»

BTW some sci-fi/futurologist people of some repute have created an interesting future space-civilization overview, "Orion arm", where the Fermi paradox is central and unexplained; e.g. there are billion-year-old scattered artifacts that are obviously the product of very advanced science, but the "Orion arm" is pretty much otherwise empty.

In this future scenario there are various levels of "hider" communities (link 1, link 2) which try to disconnect from wider society, usually for "prepper" reasons.


I tend to agree with the many that agree with Kaczynski, in that he makes several very good points - in particular that competition becomes very short-term at high degrees of complexity. And I think that this is wildly, delusionally optimistic:

«most of the large competing systems we know of do in fact pay a lot to prepare for rare disasters»

Perhaps it's because I have both an engineering and an accounting mindset, but my impression (and I am not alone) is that what I call "under-depreciation of tail risk" (or "asset stripping") is both pervasive and extremely profitable in the short term, to the point that power structures in complex societies utterly depend on it. There are various prey-predator models that illustrate the point. Put another way, actually existing societies and organizations tend to get stuck in local maxima of optimization landscapes because their internal power structures adapt to and depend on staying in those local maxima, and exploring different regions of the optimization landscape gets "discouraged".
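A throwaway sketch of that "under-depreciation of tail risk" dynamic (my own invented numbers, not a real model from the literature): a firm that skips provisioning against a rare catastrophe earns more every ordinary year and pulls ahead of the prudent one, right up until the tail event lands:

```python
# Toy illustration of "under-depreciation of tail risk": skipping the cost of
# provisioning against a rare catastrophe wins every ordinary year -- until
# the tail event actually arrives. All numbers are invented.

GROWTH = 1.05          # ordinary yearly growth
PREMIUM = 0.02         # yearly cost of provisioning against the rare disaster
LOSS = 0.90            # capital wiped out if the disaster finds you unprovisioned
DISASTER_YEAR = 40     # suppose the rare event lands in year 40

prudent = stripped = 1.0
for year in range(1, 61):
    prudent *= GROWTH * (1 - PREMIUM)   # pays the premium, grows a bit slower
    stripped *= GROWTH                  # strips the tail-risk provision as profit
    if year == DISASTER_YEAR:
        stripped *= (1 - LOSS)          # the steamroller arrives
    if year in (10, 39, DISASTER_YEAR, 60):
        print(f"year {year:2d}: prudent {prudent:5.2f}, stripped {stripped:5.2f}")
```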

What I reckon is that only *religion* can avoid under-depreciation of tail risk and getting stuck in local maxima like that, because religion motivates people to do irrational things such as exploring the optimization landscape outside their current local maximum, or actually provisioning for rare great catastrophes. Religious people will walk out of their local maximum "because God wills it", and many will walk into worse parts of the optimization landscape, but some will walk into better ones (e.g. Arabs after Mohammed, English non-conformists sailing to America).

BTW Kaczynski's argument seems to me to strongly echo the argument by D Landes about progress in Europe, that it depended on there being several distinct political systems, so that there was always *some* part of Europe that was interested in progress, and he gives these examples:

* When the Portuguese elites decided that religious repression was more important than progress and knowledge, the Portuguese men of science could emigrate to other European countries that welcomed them.
* When the emperor of China decided that foreign commerce was destabilizing, the decree was executed across the whole of China, because his authority was universal in that region.


Heh. "Kaczynski...says... This seems crazy to me." Well, you know, that's probably a justifiable conclusion, considering who you are talking about.


The parts of the system on different planets will be loosely coupled. So... this analysis would not apply to a society that has settled more than one planet.

We just have to survive that long.


I agree that a fear of being caught up in a collapse limits how dependent any one part allows itself to become on the rest, and this limits how far any one collapse can go. And yes, eventually being spread far across space will ensure a reduced dependence.
