But that stuff has a very real chance of *itself* evolving into a threat, pre-equipped with their high tech level. I mean, which is more likely to evolve into a threat: pure chemical goop that's not yet alive, or your replicating/repairing machine watchdogs/exterminators? I'd guess the latter.

It's doubtful you can build any single machine that will function correctly without repair or replication after a billion, or even a few million, years. On that time scale even processes much less noisy than DNA copying are likely to suffer duplication/repair errors that risk changing their behavior, especially for complex machines that must be capable of making all the intelligent choices needed to act with minimal direction.

So I still think you need extensive communication links, if only to ensure that malfunctioning machines get caught.
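
A rough back-of-envelope sketch of how this compounds, with purely illustrative numbers (the per-cycle error rate and cycle frequency below are assumptions, not estimates): even a tiny chance of a behavior-altering error per repair/replication cycle approaches certainty over deep time.

```python
# Back-of-envelope: chance that a self-repairing machine picks up at least one
# behavior-altering copy error over deep time. The per-cycle error rate and the
# cycle frequency below are illustrative assumptions, not estimates.

per_cycle_error_rate = 1e-9   # assumed chance one repair/replication cycle alters behavior
cycles_per_year = 1.0         # assumed repair/replication cycles per year

for years in (1e6, 1e8, 1e9):
    n_cycles = years * cycles_per_year
    # P(at least one altering error) = 1 - (1 - p)^n
    p_any = 1.0 - (1.0 - per_cycle_error_rate) ** n_cycles
    print(f"{years:.0e} years: P(at least one altering error) ~ {p_any:.3f}")
```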

You spent half of your post on this, but I did not get the point until I saw this comment. It would be useful to add this comment as an addendum.

Talking about SF, there is a short story with these themes, in which there are supposedly two kinds of civs, "Seekers and Spreaders"; unfortunately I can't remember the name right now. I also remember an author referring to civilization as a mass of replicators, but ones that are polite and replicate slowly.

They could be advanced AIs, but probably not replicators. It doesn't seem far-fetched for a civilization to want to send out its prized tech to hopeful star systems. Our golden record was offered to the universe in this spirit.

The hypothesis is essentially just that N is very small, but not 1. A few civilizations managed to send out a good number of probes.

If we look at their behaviour, they are remnants of an alien Disneyland run amok.

Nanotechnological panspermia is possible in the same ways as panspermia for life: nanobots could survive in rocks and travel between star systems.

If a civilisation created nanobots and then went extinct, those nanobots would eventually inseminate the whole galaxy.

If they are a grey-goo-style thing, we can't observe them, but if they are slow-replicating things, they could live almost unobserved everywhere.

So UFOs aren't aliens but just random alien tech demos flying around?

I think you misunderstand the usual Big Bang theory.

A slight twist, perhaps. What if the standard theory, the Big Bang, were not really correct? Suppose no central point of expansion containing all the matter in the universe ever existed, and the universe instead originated via some kind of, call it, precipitation of matter from energy in a much more distributed manner. In that setting, looking at the relative speeds of distant objects really doesn't tell us how old they are.

Given an origin of the universe like that, what level of development should we expect from any aliens? How might one change both one's thinking about finding such aliens and about what risks/benefits they might hold for humans and human civilization?
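
For context, the standard inference this twist questions runs roughly: recession velocity scales with distance (Hubble's law, v = H0 × d), and 1/H0 gives an order-of-magnitude age for the universe. A minimal sketch of that arithmetic, assuming the commonly quoted H0 ≈ 70 km/s/Mpc:

```python
# Minimal sketch of the standard "age from recession speeds" inference:
# take Hubble's constant and compute the Hubble time 1/H0.

H0_KM_S_PER_MPC = 70.0        # assumed value of Hubble's constant, km/s per megaparsec
KM_PER_MPC = 3.0857e19        # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7

H0_per_second = H0_KM_S_PER_MPC / KM_PER_MPC            # H0 in units of 1/s
hubble_time_years = 1.0 / H0_per_second / SECONDS_PER_YEAR
print(f"Hubble time ~ {hubble_time_years:.2e} years")   # ~1.4e10 years
```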

He called that the Katechon Hypothesis. It's pretty cool, actually: https://www.unz.com/akarlin...

I'm not sure if this meets the definition of "aliens", but I think my hypothesis would be more along the lines of there being some civilization (probably extremely distant) that was not advanced enough to seed the galaxy with life, but decided it wanted to blast out its technology. It decided our planet had a good chance of spawning life, so it sent a couple here. Incidentally, this fits best with our current "evidence" of aliens, i.e. some unexplainable technology.

This places the Great Filter right about where we are, but a few civilizations occasionally manage to chuck a decent amount of intelligent mass into the universe before failing.

Also the Spican civilization in Pushing Ice. Reynolds is a fan of this solution to the Fermi paradox.

Huh? I don't presume we live in a simulation. And in ordinary lands, expansion reduces existential risk.

If, at a sufficiently high level of collective intelligence, one can deduce that interstellar expansion poses serious existential risks (e.g. by exceeding the simulation's capacity within a certain region of space-time, given a high enough concentration of intelligent agents within it), then such a policy would emerge universally, exceptions having been eliminated by wiser xenos (violently or otherwise), or by the Architect.

I'd like to suggest a thought experiment along different lines.

Suppose we master genetic engineering to the point that every future human being, beginning, let's say, 500 years from now, will be born with:

a) An average IQ of 200 (baseline 140, upper bound 260) relative to today's 100;

b) 100% potential to achieve the highest stages on the various Neo-Piagetian developmental psychometric scales (e.g. Loevinger's ego development, Kohlberg's moral reasoning, and Fowler's faith development), and therefore perfect gratification delay, by no later than the optimal point of their 35th birthday;

c) No one, biological or EM, dies from anything anymore, except for extraordinarily rare accidents, meaning most people alive will have more life experience, and therefore pragmatism, than the sagest of current-day octogenarians;

d) EMs, if they exist in this scenario, come with all of the above values already maximized from the get-go;

e) Any other species are likely to do the same to themselves, according to their own psychological profiles, however alien those may be.

There's likely nothing in the past that meets these criteria, and even nowadays there's probably fewer than one in a billion people who fit the first two points, but these are very likely developments given all the genetic engineering advances we're seeing. Therefore, while inferring what a society built by people with these characteristics would be like may be strictly impossible, it may be a valid exercise in trying to figure out how much of our previous data may still be useful, or not, in trying to infer future developments.
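
As a rough check on that rarity figure, here is a minimal sketch assuming the conventional model of IQ as normally distributed with mean 100 and standard deviation 15 (an assumption; real-world tails need not behave this way):

```python
# Rough check on "fewer than one in a billion": probability of IQ >= 200 under a
# normal distribution with mean 100 and standard deviation 15.
from math import erfc, sqrt

mean, sd, threshold = 100.0, 15.0, 200.0
z = (threshold - mean) / sd              # about 6.7 standard deviations above the mean
tail = 0.5 * erfc(z / sqrt(2))           # upper-tail probability P(IQ >= 200)
print(f"z ~ {z:.2f}, P(IQ >= 200) ~ {tail:.1e}")  # ~1e-11, well under one in a billion
```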

Or, to put it another way: when literally everyone can see, understand, and act in light of a Big Picture they all properly understand, and there are literally no sociopaths or psychopaths left to focus their efforts on acquiring utility for themselves at the expense of others, would a notion such as "central control" still make sense, and still be needed?

Obviously we never have data directly on the future; we always infer the future from the past. That said, we have a lot of data on species and cultures and their tendency to expand. A mere inclination of many to not expand is far from sufficient to prevent expansion. That's why I posited a strong central control to prevent expansion.
