
I rather doubt I’ll be rocketing to Alpha Centauri to build my dream house. Singularity aside, it’s more than likely I’ll be part of mother earth’s compost heap.


In addition, our shell of AM broadcasting, and even FM broadcasting, is only a century thick. Broadcasts now are shifting to digital formats.

* Television broadcasts are actually a multiplex of several data streams, each of which encapsulates a highly-compressed encoding of video or audio, which is incomprehensible until you know what the codec is.
* The AM band is about to go, changing over to DRM (an unfortunate acronym clash; in this context it stands for Digital Radio Mondiale) in the coming decade.
* Shortwave is already moribund and is going DRM as well.
* FM seems to be holding out; the UK tried and failed to popularise DAB. But that's more juicy spectrum for repurposing and they're going to keep trying.
* Lots of music and television goes over the Internet now. Lots of it.

We have also changed codecs frequently as we come up with ones that better fit the constraint of limited bandwidth and the availability of a ridiculous surplus of CPU power.

So no: at most, they might detect something in our direction with the spectrum of oddly-coloured noise.
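To make that concrete: well-compressed data is statistically close to random noise. A minimal sketch (Python standard library only) showing the entropy gap between redundant plain text and its zlib-compressed form:

import math
import random
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    # Shannon entropy in bits per byte; 8.0 would be uniform random noise.
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

random.seed(0)
words = [b"signal", b"noise", b"carrier", b"codec", b"radio"]
text = b" ".join(random.choice(words) for _ in range(20_000))

print(f"plain text : {byte_entropy(text):.2f} bits/byte")                   # well below 8
print(f"compressed : {byte_entropy(zlib.compress(text, 9)):.2f} bits/byte")  # close to 8

The better the codec, the closer its output gets to featureless noise, which is exactly why an eavesdropper who doesn't know the codec sees only an oddly-shaped spectrum.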


Here's your filter:

http://www.astrobio.net/pre...

http://sites.bio.indiana.ed...

Put together, the two articles say that the emergence of the eukaryote was a singular, rare event, and that it is necessary both for the creation of a free-oxygen atmosphere and for the rise of complex life. In short, we are likely alone, because there are no other planets with an oxygen atmosphere, let alone intelligent life.
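A toy Drake-style calculation (every factor below is invented; only the structure of the argument matters) shows how one sufficiently rare transition can empty the whole observable universe:

# Toy Drake-style estimate; all factors are invented for illustration.
N_PLANETS = 1e22      # habitable-zone planets in the observable universe
F_LIFE = 1e-2         # fraction developing prokaryote-like life
F_EUKARYOTE = 1e-24   # a "singular, rare" prokaryote -> eukaryote transition
F_COMPLEX = 1e-1      # fraction of eukaryote worlds reaching complex life

expected_others = N_PLANETS * F_LIFE * F_EUKARYOTE * F_COMPLEX
print(f"expected other complex biospheres: {expected_others:.0e}")  # ~1e-05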

Look at the bright side. We're past the filter and all of that real estate out there is ours for the taking. You can't beat that with a stick.


I was making the point that it's not just brain size that matters. A recent paper in Current Biology discusses this with regard to Neanderthals:

http://www.cell.com/current...

There are almost certainly a number of nontrivial morphological brain differences between us and prior hominids.

Of course all this still gets away from the point that hominids are only one genus out of the millions that have existed. If we had gotten wiped out (as we almost were), what would have replaced us?


Chris, the claim is that brains would have produced something like us within another few hundred million years. This is claiming intelligence is "easy" from the point of view of a planet over that timescale, not "easy" from the view of a particular species within its lifetime. Noting that Neanderthals didn't do it within a hundred thousand years is hardly relevant.
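A toy calculation (the per-Myr rate is invented, for illustration only) shows how different the two timescales are:

# If some lineage has probability P_PER_MYR per Myr of producing human-level
# intelligence, the planet-level odds over 300 Myr dwarf the odds for any
# single species during its ~0.1 Myr tenure.
P_PER_MYR = 0.01  # hypothetical rate, purely for illustration

def p_within(myr: float) -> float:
    return 1 - (1 - P_PER_MYR) ** myr

print(f"planet, 300 Myr : {p_within(300):.2f}")   # ~0.95
print(f"species, 0.1 Myr: {p_within(0.1):.4f}")   # ~0.001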


Of course quite a few species have big brains (relative to body size), but they won't be developing radio anytime soon due to other physical limits (good luck with the flippers, dolphins). Also, cognition is not simply raw computing power; brain-region specialization probably plays as big a role as brain size, if not a bigger one. For example, Neanderthals were around for about a hundred thousand years with a comparably sized brain, and still failed to accomplish what we have. And all this pales in comparison to the vast majority of evolutionary branches, which have not been trending towards higher intelligence (only one phylum out of roughly 70 recognized phyla has). Ignoring them would be a form of sampling bias.

The popular view is that civilization was practically inevitable and another species would replace us. However, you would find very few biologists who would lend support to this viewpoint.
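For what it's worth, the "relative to body size" point is usually quantified as an encephalization quotient. A rough sketch using Jerison's classic EQ formula (the mass figures below are ballpark values, for illustration only):

# Jerison's encephalization quotient: EQ = brain_g / (0.12 * body_g ** (2/3)),
# with both masses in grams. Figures are rough textbook values.
species = {
    "human":   (1350.0, 65_000.0),
    "dolphin": (1600.0, 200_000.0),
    "chimp":   (400.0, 45_000.0),
}

for name, (brain_g, body_g) in species.items():
    eq = brain_g / (0.12 * body_g ** (2 / 3))
    print(f"{name:8s} EQ ~ {eq:.1f}")

Humans come out around 7, dolphins around 4, chimps around 2.5: several lineages have big brains for their bodies, but only one of them can hold a soldering iron.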


If you haven't read the Brin story "Lungfish" I mentioned above, you might find it useful. In it, "berserkers" were a primitive and fairly weak form of intelligent machine. Many types were specifically mentioned, including "police" types that hunted berserkers, but they all fell into two broad categories, pro-life and anti-life, and after a multi-million-year war no one broadcast radio, to avoid attracting notice.


Prof., what's the most reasonable extrapolation from our own planet's history? Who would we be the equivalent of at various points in history? At many points I assume an indigenous island community somewhere around the global median in technological ability, but unable to observe other human cultures due to geographic isolation. I agree there doesn't seem to be much berserker precedent in our past, although perhaps slow, functional berserker behavior. I don't see much evidence of civilizational hiding. Perhaps subculture hiding? Now that I think about it, cultures engaging in berserker-style subculture purges (and preemptive and reactive hiding by those subcultures) may be widely distributed in our history. But populations themselves don't seem to hide much, nor do populations seem to engage in unreflective berserker purging of external populations, unless there's a lack of internal transparency about the organizational motivation.


This is the classic "berserker" scenario, which I don't find very plausible. You remind me to post on that sometime.


erik, interesting post with some ideas I hadn't considered before.


so, surviving civilizations are those that defend against existential risks. their biggest risk is other civilizations, either by direct attack or grey goo accident. we see no evidence of either (even in other galaxies!), suggesting they have been successfully prevented. how might the earliest civilizations have accomplished this? only by suppressing the development of all others, while leaving no evidence to attract attention from any competitors they missed (or tip nascents off as to how to defend against suppression).

a colonization wave would leave evidence. so it may be sensible to limit one's own tendency to colonize, preferring instead the wide dispersal of small automated sterilizing systems to prevent the rise of competitors. this reasoning holds even for powerful post-biological entities that control the resources of, say, a star. we can conclude that competitor suppression must be one of the top value priorities of the most powerful agents.

so why have we not been suppressed by a galactic monitoring system? if the system used self-replication to disperse, it would need high replication fidelity *together* with tight limits on replication in order to avoid becoming either evidence or its own grey goo problem. thus, there are pressures against making the system as efficient as possible, as long as it is as efficient as necessary. this explains the fermi paradox and why we are still here (though we probably don't have long to wait). consider that the sterilizers must avoid any possibility of capture -- if they were to be reverse engineered and their safety limits compromised, they would be a potent grey goo threat. so there are likely pockets where the local sterilizer has self-destructed due to tampering. we might be in such a pocket, but should expect that it will not be allowed to persist long enough for us to become a threat.

robin, in your 'faraway wall of galactic colonization' model, you focus only on the wild-fire dynamics of the most valuable consumable resource. if common or durable resources can support civilizations (which seems likely, eg stars), then we should expect to see almost all oases, including nearby, inhabited -- unless something is actively preventing this. potential colonizers are smart enough to understand wild-fires, and likely see little utility in initiating/participating in one. in fact, they must place extreme value on preventing others from starting them, by either purpose or accident.

a variation: perhaps ancient civilizations perform controlled burns, eliminating origin-of-life fuel, in an effort to prevent the rise of competitors. we may be the accumulating ground brush that signals the need for an upcoming purge. could periodic engineered galactic purges explain mass extinctions in the geologic record? is the most common ground-brush-observer-moment immediately prior to a controlled burn? how often should burns be scheduled to reliably prevent the fastest competitor from becoming powerful enough to survive?
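a back-of-the-envelope sketch of that last scheduling question (every parameter below is invented): treat competitor emergence as a poisson process and count how many competitors could mature between burns.

# Toy model of "how often should burns be scheduled?" All numbers invented.
# Competitors arise as a Poisson process with rate LAM per Myr; a competitor
# needs T_MATURE Myr of uninterrupted growth to survive the next purge.
LAM = 1e-4        # hypothetical: one new competitor per 10,000 Myr
T_MATURE = 100.0  # hypothetical: Myr from emergence to unsuppressible

def expected_escapees_per_cycle(interval_myr: float) -> float:
    # Only competitors born at least T_MATURE before the next burn mature,
    # i.e. those arising in the first (interval - T_MATURE) of each cycle.
    return LAM * max(0.0, interval_myr - T_MATURE)

for interval in (50.0, 100.0, 500.0, 5000.0):
    print(f"burn every {interval:6.0f} Myr -> "
          f"{expected_escapees_per_cycle(interval):.2e} expected escapees")
# any interval <= T_MATURE guarantees no competitor matures between burns.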


The most dangerous UFAI is one that uses SETI channels to send copies of itself (that is, a description of a computer and a program for it) across the Universe. Its aim is to use naive civilizations as a "cosmic commons" for resending its copies farther.
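(A toy illustration of the mechanism: a minimal Python quine, a program whose output is its own source code, showing how a message can carry everything a recipient needs to regenerate it for retransmission.)

# A minimal self-reproducing program: running it prints its own source,
# so a naive recipient who executes the "message" regenerates it verbatim.
s = 's = %r\nprint(s %% s)'
print(s % s)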

Such messages should dominate the Universe.

So, we will go extinct soon after we find evidence of ET. This explains why we find ourselves in a time period in which ET has not yet been found.


If we have alien von Neumann probes in the Solar System, what would we expect to observe?

1) They destroy everything - nothing to observe.
2) They are nanobots - they can't be observed even if they are present in this room, or in my brain.
3) They dug deep into the Moon - nothing to observe.
4) They fly over large cities - but nobody believes your evidence about UFOs - no real observation.


"... the best survival elements over middle tier subcultures ..."

should read "... the best survival elements OF middle tier subcultures ..."

" I think that means spending resources to stay symmetrically visible to the universe we see as we grow"

should read " I think that means spending resources to stay symmetrically INVISIBLE to the universe we see as we grow"


The practical attempt at a "perfect game" solution that comes to mind, given that we're probably a middle tier average civilization, is:

(1) We should develop a macro culture that mimics the best survival elements over middle tier subcultures within our civilization. Perhaps we can learn something from traditional successful "middle castes" throughout the world.

(2) I lean more in the direction of Prof. Hanson that we should strive to remain invisible while other civilizations are invisible - and we should bet resources in that direction. Perhaps we could win a fool's mate of survival by maximizing growth efficiency and not spending on invisibility, but one shouldn't take risks when it comes to macrocivilizational survival. So we should strive to play a perfect game. I think that means spending resources to stay symmetrically visible to the universe we see as we grow.


That's why I wrote:

Possible conclusions that can be drawn from the Fermi paradox regarding risks associated with superhuman AI versus other potential risks ahead: The Fermi paradox provides the only data we can analyze that amounts to empirical criticism of concepts like the Paperclip maximizer, and of general risks from superhuman AIs with non-human values, short of working directly on AGI to test those hypotheses ourselves. If you accept the premise that life is not unique and special, then one other technological civilisation in the observable universe should be sufficient to leave potentially observable traces of technological tinkering. Given the absence of any signs of intelligence out there, especially paper-clippers burning the cosmic commons, we might conclude that unfriendly AI cannot be the most dangerous existential risk we should worry about. (What I would like the SIAI to publish)
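The argument can be put in toy Bayesian form (all numbers below are invented; only the direction of the update matters): if commons-burning UFAI were the typical outcome of civilizations, a quiet sky would be surprising, so observing one should lower our credence in that scenario.

# Toy Bayesian update; every probability below is invented for illustration.
p_ufai = 0.5             # prior: UFAI is the typical outcome
p_empty_if_ufai = 0.01   # a quiet sky is unlikely if paper-clippers abound
p_empty_if_not = 0.9     # a quiet sky is likely otherwise

p_empty = p_ufai * p_empty_if_ufai + (1 - p_ufai) * p_empty_if_not
posterior = p_ufai * p_empty_if_ufai / p_empty
print(f"P(UFAI typical | quiet sky) = {posterior:.3f}")  # ~0.011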
