I’m a big fan of Nick Bostrom; he is way better than almost all other future analysts I’ve seen. He thinks carefully and writes well. A consistent theme of Bostrom’s over the years has been to point out future problems where more governance could help. His latest paper,
"the TLI per dollar makes AK47 and nukes on par, and the heavier weapons require big support teams"
After the Soviet Union collapsed, there were massive numbers of nuclear weapons in Eastern Europe left completely unguarded.
It may have cost a lot to build them and required large support teams to maintain them, but thieves don't pay for any of that.
I am leery of any argument in favor of increased political and economic centralization.
We should check theory against data whenever we can.
Yes, sometimes there are first-attacker advantages, and then people attack first.
How do you deter someone who's willing to die as long as he takes you down with him? MAD only works as long as mutual coexistence is preferable to mutual destruction.
Wouldn't mutual assured destruction work on the individual and small-group level as well? I have a gun and you have a gun; therefore, my probability of getting killed if I were to pull a gun on you is higher than if I keep mine in its holster. It seems to me that this would continue to be part of small inimical groups' cost-benefit analysis of potential actions, even given more readily available destructive capabilities, since those capabilities would be available to counter-groups too.
I think the conclusion here is that we have to depend more on theory or abstract reasoning than empirical evidence to figure out what to do. (Which I wanted to point out because it seems like you were emphasizing past trends a lot in your post and the addendum.)
One can make plausible selection arguments for why disasters that call for more governance seem less common than they actually are. But one can also make similar arguments for why disasters that call for less governance also seem less common than they are.
If there are lots of random technologies left to be discovered, it seems like there's a high risk that at least one of them will let an individual or small group of people destroy the world or civilization. Anthropic selection would ensure that no such technology existed in our world's past, but why wouldn't such a technology exist in our future?
More speculatively, anthropic selection could also explain the lack of a trend towards greater individual or small-group lethality: it could be that a world containing technologies that let an individual kill thousands of people would have evolved social structures very different from ours to prevent such killings, and those social structures would reliably cause civilizational stagnation or collapse.
A good fictional fragile world story is "Solution Unsatisfactory" by Robert Heinlein. As best the reader can tell, the central character has at each point made the right decision--and it's pretty clear that catastrophe has only been delayed.