31 Comments

First, when something becomes visible, your killing it would seem a “public good” act which benefits all species, but mainly costs yours. Your killing action takes up your resources, and risks making you visible to be destroyed by others. Unless you think this new visible thing is especially likely to compete with your siblings, relative to other competitors, you’d rather wait and let something else destroy it.

Unless the killing is done by probes which can't (easily) be traced back to you. (Cf. governments etc. hiring assassins, often via further intermediaries.) Maybe self-replicating probes, which thus spread far & wide, making them even harder to trace back to you, and which don't use up your local resources.

These are all interesting scenarios, but physical realities will keep each civilization safely confined to its home turf. No berserker dictator, no matter how evil, will plan a blitzkrieg attack on his neighbors that must be carried out by his distant descendants after a 3,054-year journey. The natural human instinct to protect oneself and one's close relatives (over one's unrelated fellow citizens) probably loses its appeal when projected more than 3 or 4 generations. In other words, not many parents concern themselves with their future grandchildren's well-being 200 years from now, even though we all had great-great-grandparents 200 years ago. Similarly, the individual occupants of our civilization, and of others, will feel little incentive to plan events from which neither they nor their relatives will benefit for 370 years.

Another type of berserker equilibrium, which I forgot to mention, is an equilibrium of SETI-attack radio messages. A SETI-attack sends a description of an AI and a computer via radio to naive young civilizations, which will build this alien AI and be destroyed by it; afterward the AI sends copies of itself further on. Different types of such messages will compete for newly appearing civilizations, which are the main resource in the universe. This could happen only if the density of civilizations is very low (perhaps several per galaxy), because this type of berserker is the fastest (it moves at light speed) and would arrive in the Solar System before any nanotech-based material berserkers. See more: "Risks of SETI" http://www.scribd.com/doc/7...

A berserker ecology could be (and must be) based on nanotech. So nano-berserkers could exist undetectably inside this screen, and even inside the brain of the reader.

The main question is what trigger starts the killing program. The trigger must lie ahead of us, because we are still alive. This could be a result of observation selection: we could find ourselves only in those populations of berserkers whose trigger reacts only to technologically advanced civilizations.

But such a trigger can't be set arbitrarily high. Once we create our own nanotech, we will soon be able to discover nano-berserkers and eliminate them. So the creation of the first nanobot should be a very risky event, because it could start the nano-berserkers' killing program.

In order not to be detected by other berserkers, they would have to show illogical patterns of behavior; absurdity is the best camouflage. (This could explain some observations of UFOs as clouds of extraterrestrial nanobots; UFOs are known for absurd behavior.)

Observation selection also implies that we most likely find ourselves in the domain of the most illogical berserkers, because logical ones would have killed us much earlier.

See more in my article "UFO as a global risk", chapter "Extraterrestrial nanobots": http://www.scribd.com/doc/1...

TGGP, yes. Or, if someone is saying "we're probably the only one" instead of "we're probably part of the first, young-enough cohort", the distinction should be probabilistically grounded instead of assumed. Also, I'm cautioning here against the assumption that there's no civilization permanently ahead of us. It may be checkmate in 100 moves instead of in 10 moves.

So is the "cohort" idea that other similarly advanced species may exist, but they are far enough away and similarly young enough that we have not yet encountered evidence of them?

Robin says, "I’d love to see (and even help with) attempts to find stable equilibria within computer simulations of such scenarios."

I suggest creating a multi-player online strategy game and crowdsourcing the simulation. I would play!
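As a toy illustration of the kind of simulation Robin describes, here is a minimal Python sketch under made-up assumptions (the number of civilizations, the round count, and the detection probability are all arbitrary): each civilization picks a "loud" or "quiet" strategy, and loud ones risk being spotted and destroyed by berserkers each round.

```python
import random

random.seed(0)

N_CIVS = 100
ROUNDS = 50
DETECT_PROB = 0.3  # per-round chance a loud civ is spotted (assumption)

# Each civilization chooses a strategy: "loud" (expand visibly) or "quiet".
civs = [{"strategy": random.choice(["loud", "quiet"]), "alive": True}
        for _ in range(N_CIVS)]

for _ in range(ROUNDS):
    for civ in civs:
        if civ["alive"] and civ["strategy"] == "loud" \
                and random.random() < DETECT_PROB:
            civ["alive"] = False  # spotted and destroyed by a berserker

survivors = [c for c in civs if c["alive"]]
quiet_share = sum(c["strategy"] == "quiet" for c in survivors) / len(survivors)
print(f"{len(survivors)} of {N_CIVS} civs survive; {quiet_share:.0%} are quiet")
```

Even this crude setup converges on the "quiet and hard to notice" equilibrium discussed below; a real study would add resources, probe-launching choices, and retaliation, and could indeed be crowdsourced as a game.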

TGGP, nothing much to it.

"I think the Fermi paradox comes down to only a few alternatives that are consistent with what we see:

- we’re the first species (or perhaps the universe is a simulation where we’re alone). Technological life is very rare."

I've seen this a few times. I don't get the logical leap to "we're THE first/only species" rather than "we're part of the first cohort", unless it's statistically grounded in a model showing it's more likely that there's just one species like us than a whole cohort.

Re: "I suppose it might be possible for the aliens to be more unified than us, but I think in practice it would evolutionarily unlikely. The more unified you are, the greater the potential evolutionary reward for cheaters and dissidents."

The more unified you are, the greater the power you can collectively assert to prevent any kind of dissent or disharmony. Evolution has a major trend towards bigger organisms, exhibited in many lineages. Were it not for meteorite bombardment, we would all have united long ago.

Hopefully Anonymous, could you elaborate on the "species cohort" idea? And what berserker precedent is there in Earth's history?

You hope so... undetectable that far away with our technology, maybe.

Hmm... Here's something to throw into the mix. Let's say technological change levels off "shortly" after industrialization, because societies reach the physical limits of what's possible or feasible with technology within short time horizons (geologically / astronomically speaking). For example, we can imagine and model the "ultimate laptop" even if we can't build one. But if Moore's Law holds up for 300 more years, maybe we'll get close to what's feasible under hard physical limits.

The point is, there may be a rapid transition from us to some end-stage level of intelligence and technology. After that, the physical constraints of the universe allow only incrementally better performance. One can imagine that once you transition to post-Singularity status, it doesn't really matter whether your civilization is 1,000 or 1 billion years old, since there's little room left for further improvements or optimizations. After the first 1,000 years, you can scale things up, but you can't miniaturize any more or get smarter or more efficient.
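A back-of-envelope check on that 300-year figure, assuming the classic two-year doubling time (itself only a rough historical average):

```python
# Rough Moore's-law extrapolation: one doubling every ~2 years (assumption).
years = 300
doubling_time = 2

doublings = years // doubling_time  # 150 further doublings
improvement = 2 ** doublings        # overall improvement factor
print(f"{doublings} doublings, roughly a 10^{len(str(improvement)) - 1}x gain")
```

A factor of ~10^45 would dwarf any plausible remaining hardware headroom, which is the sense in which a few more centuries of exponential progress could exhaust the room for miniaturization.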

I'm not at all sure how this affects berserker scenarios, which I think are unlikely. But the current equilibrium state sure seems to favor "quiet and hard to notice" civilizations, if they exist at all. Perhaps that means there's a quick convergence to the realization that the best offense and defense is stealth, since if "they" can see you, you are really, really easy to kill and other countermeasures are useless. Maybe we don't see Dyson spheres and the like because they'd be way too vulnerable to vandalism.

Some other points:

(1) If you wanted to wipe out rivals, it makes sense to do it well before they reach technological civilization. Otherwise, they can quickly transition into peers (competitors) of like power. Maybe wipe out all life (periodic gamma-ray bursts to sterilize a galaxy, or at least ruin complex ecologies?). Maybe some active galaxies have a form of pest control. If we use advanced space telescopes and find no evidence of life in the atmospheres of terrestrial planets in habitable zones (or we don't find terrestrial planets in habitable zones at all), maybe the berserkers have successfully suppressed most life.

(2) Perhaps berserkers know about Great Filters. They don't bother to wipe out all ecologies, because Great Filters between multicellular life and technological civilization make it generally not worth the effort, especially if they run a risk of exposing themselves to rival berserkers. We were lucky to make it through the "natural" filters and evolve a technological civilization. Now we're worth the extra risk of destroying soon, before we become a peer and harder to destroy. More reason not to shout in the dark with powerful radio beacons.

Fun stuff!

It's noisy enough out there that our radio transmissions are going to be undetectable that far away.

As another counter-point, why assume aliens would want to annihilate each other in the first place?

Culture A might spread by destroying its enemies and physically replicating. Culture B might spread by seeding its memes into other cultures, as well as physically replicating.

In our current understanding of the Universe, memes can move much more quickly than weapons because they can move at the speed of light. So we might suspect culture B to be far more successful and wide-spread than culture A.
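To put rough numbers on that speed advantage (the separation distance and probe speed below are arbitrary assumptions for illustration):

```python
distance_ly = 1000   # assumed separation between cultures, in light-years
probe_speed = 0.1    # assumed probe speed, as a fraction of light speed

meme_travel_years = distance_ly / 1.0           # radio signal at light speed
probe_travel_years = distance_ly / probe_speed  # physical replicator

print(meme_travel_years, probe_travel_years)
```

Under these assumptions culture B's memes arrive ten times sooner per hop than culture A's probes, and that head start compounds with every generation of spread.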

Of course, if culture A were widespread throughout the galaxy, we'd probably have noticed as soon as we discovered radio waves.

It's one thing to assume that aliens are relatively nonviolent toward one another. Nobody knows how they will feel about inferior life forms.

We do have plenty of berserker precedent in our planet's ecological history, although it may be statistically improbable and not ecologically universal (as it would appear to be in our dead light cone).

I could see a berserker stand-off emerging without any planetary intelligence discovering any other planetary intelligence: it would just require the "Great Filter" epiphany to emerge reliably enough for our light cone to look dead to us, given our current technology and the time we've spent looking. Beyond that, I think it's a probabilities question that I'm not competent to address rigorously.
