Many people argue that we should beware of foreigners and people of other ethnicities: beware of visiting them, trading with them, talking to them, or allowing them to move here. The fact that so many people are willing to argue for such conclusions is some evidence in their favor. But the fact that the arguments offered are so diverse, and so often contradict one another, weakens that evidence somewhat. The pattern suggests that people start from a preconceived conclusion and then opportunistically embrace whatever arguments they can find for it.
Similarly, many argue that we should be wary of future competition, especially if it might lead to concentrations of power. I recently posted on my undergrad law & econ students’ largely incoherent fears of one group taking over the entire solar system, and on how Frederick Engels expressed related fears back in 1844. And I’ve argued on this blog with my ex-co-blogger about his concern that if future AI results from competing teams, one team might explode to suddenly take over the world. In this post I’ll describe Ted “Unabomber” Kaczynski’s rather different theory of why we should fear competition leading to concentration, from his recent book Anti-Tech Revolution.
Kaczynski claims that the Fermi paradox, i.e., the fact that the universe looks dead everywhere, is explained by the fact that technological civilizations very reliably destroy themselves. When this destruction happens naturally, it is so thorough that no humans could survive. Which is why his huge priority is to find a way to collapse civilization sooner, so that at least some humans survive. Even a huge nuclear war is preferable, as at least some people survive that.
Why must everything collapse? Because, he says, natural-selection-like competition only works when competing entities have scales of transport and talk that are much smaller than the scale of the entire system within which they compete. That is, things can work fine when bacteria that each move and talk across only meters compete across an entire planet; the failure of one bacterium doesn’t then threaten the planet. But when competing systems become complex and coupled on global scales, there are always only a few such systems that matter, and breakdowns often have global scope.
Kaczynski dismisses the possibility that world-spanning competitors might anticipate large correlated disasters, and work to reduce their frequency and mitigate their harms. He says that competitors can’t afford to pay any cost to prepare for infrequent problems, as such costs hurt them in the short run. This seems crazy to me, as most of the large competing systems we know of do in fact pay a lot to prepare for rare disasters. Very few correlated disasters are big enough to threaten to destroy the whole world. The world has had global-scale correlation for centuries, and the world economy has grown enormously over that time. Yet we’ve never seen even a factor-of-two decline, while a total collapse would require at least thirty factors of two, i.e., a decline by a factor of 2^30, roughly a billion. And while it should be easy to test Kaczynski’s claim in small complex systems of competitors, I know of no supporting tests; a toy version of such a test is sketched below.
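To make the idea of such a test concrete, here is a minimal sketch of a toy simulation, offered in the spirit of the claim rather than as a faithful model of it. All parameters here (growth rates, shock sizes, the coupling knob) are hypothetical choices of mine, not anything from Kaczynski or from published work. Competitors grow each round, may pay an ongoing cost to prepare for disasters, suffer shocks whose scope is local or global depending on a coupling parameter, and imitate richer rivals. Kaczynski’s claim predicts that preparation gets competed away; one can check whether it does, and how often the whole population collapses:

```python
import random

# Toy model of competing systems (all parameters hypothetical).
N_AGENTS = 100         # number of competing systems
ROUNDS = 2000
GROWTH = 1.02          # per-round growth factor
PREP_COST = 0.005      # growth sacrificed each round by prepared agents
SHOCK_P = 0.01         # per-round chance of a disaster
GLOBAL_COUPLING = 0.5  # chance a disaster is global rather than local
LOCAL_SCOPE = 5        # agents hit by a local disaster
UNPREP_LOSS = 0.9      # fraction of wealth lost if unprepared
PREP_LOSS = 0.3        # fraction of wealth lost if prepared
COLLAPSE = 1e-6        # wealth level counting as total collapse

def run(seed: int) -> tuple[float, bool]:
    rng = random.Random(seed)
    wealth = [1.0] * N_AGENTS
    prepared = [rng.random() < 0.5 for _ in range(N_AGENTS)]
    for _ in range(ROUNDS):
        # Growth: preparation costs a little every round.
        for i in range(N_AGENTS):
            wealth[i] *= GROWTH - (PREP_COST if prepared[i] else 0.0)
        # Disasters: global coupling widens the scope of a shock.
        if rng.random() < SHOCK_P:
            if rng.random() < GLOBAL_COUPLING:
                hit = range(N_AGENTS)
            else:
                hit = rng.sample(range(N_AGENTS), LOCAL_SCOPE)
            for i in hit:
                wealth[i] *= 1.0 - (PREP_LOSS if prepared[i] else UNPREP_LOSS)
        # Selection: a random agent imitates a richer random rival.
        a, b = rng.sample(range(N_AGENTS), 2)
        if wealth[b] > wealth[a]:
            prepared[a] = prepared[b]
    frac_prepared = sum(prepared) / N_AGENTS
    collapsed = max(wealth) < COLLAPSE
    return frac_prepared, collapsed

if __name__ == "__main__":
    results = [run(seed) for seed in range(20)]
    print("mean fraction prepared:",
          sum(f for f, _ in results) / len(results))
    print("collapse frequency:",
          sum(c for _, c in results) / len(results))
```

Under most settings like these I’d expect preparation to persist whenever disasters are frequent and severe enough to matter, which is the intuition behind my objection above. The point of the sketch is only that the claim is cheap to test, not that this particular toy settles it.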
Yet all of the dozen reviews I read of Kaczynski’s book found his conclusion here to be obviously correct. That seems to me evidence that a great many people find the worry about future competitors so compelling that they will endorse most any vaguely plausible supporting argument. Which I see as weak evidence against that worry.
Yes, of course correlated disasters are a concern, even when efforts are made to prepare for them. But it’s just not remotely obvious that competition makes them worse, or that big disasters reliably and completely destroy all civilizations, so much so that we should prefer to start a big nuclear war now that destroys civilization but leaves a few people alive. Surely, if we believed his theory, a better solution would be to break the world into a dozen mostly isolated regions.
Kaczynski does deserve credit for avoiding common wishful thinking in other parts of his discussion. For example, he says that we can’t much control the trajectory of history, both because it is very hard to coordinate on the largest scales, and because it is hard to estimate the long-term consequences of many choices. He sees how hard it is for social movements to actually achieve anything substantial. He notes that futurists who expect to achieve immortality and then live for a thousand years too easily presume that a fast-changing competitive world will still have need for them. And while I didn’t see him actually say it, I expect he’s the sort of person who’d make the reasonable argument that individual humans are just happier in a more forager-like world.
Kaczynski isn’t stupid, and he’s more clear-headed than most futurists I read. Too bad his low mood inclines him so strongly to embrace a poorly argued story of inevitable collapse.
Some book quotes on his key claim: