Beware Future Filters

Though we can now see over 10^20 stars that are billions of years old, none has ever birthed a visible interstellar civilization. So there is a great filter at least that big preventing a simple dead star from giving rise to visible colonization within billions of years. (This filter is even bigger given panspermia.) We aren't sure where this filter lies, but if even 10% (logarithmically) of it still lies in our star's future, we have less than a 1% chance of birthing a wave. If so, either we are >99% likely to forevermore try to, and succeed in, stopping any capable colonists from leaving here to start a visible colonization wave, if given such a choice, or we face poor odds of surviving to have such a choice.
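
To make that arithmetic concrete, here is a minimal sketch using the numbers above (the code is only an illustration of the claim, not part of any formal analysis):

```python
import math

# Rough arithmetic behind the claim above, using the post's numbers.
stars = 1e20                  # old stars, none of which birthed a visible civilization
total_filter = 1.0 / stars    # so each star's chance is at most ~1e-20
log_units = -math.log10(total_filter)     # 20 "logarithmic" units of filter

future_share = 0.10           # suppose 10% of those log units still lie in our future
p_birth_wave = 10 ** (-future_share * log_units)
print(p_birth_wave)           # 0.01, i.e. less than a 1% chance of birthing a visible wave
```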

Back in March I noted that Katja Grace had an important if depressing insight:

Back in ‘98 I considered the “doomsday argument” … [but] instead embraced “self-indication analysis”, which blocks the usual doomsday argument.  In ‘08 I even suggested self-indication helps explain time-asymmetry. … Alas, Katja Grace had just shown that, given a great filter, self-indication implies doom!  This is the great filter … Alas I now drastically increase my estimate of our existential risk; I am, for example, now far more eager to improve our refuges.

Katja has just finished her undergrad honors thesis at ANU, which reports that all three of the main ways to pick a prior re indexical uncertainty (on who I am in this universe) imply that future filters are bigger than we'd otherwise think. And not just by small amounts – the bigger the filters, the bigger the boost to future filters.
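
Here is a minimal toy sketch of the self-indication effect Katja describes. The two hypothetical worlds and every number below are invented for illustration; they are not from her thesis:

```python
# Toy illustration of how self-indication (SIA) boosts expected future filters.
# Two hypothetical worlds, equally likely a priori; all numbers are made up.
priors = {"mostly_early_filter": 0.5, "mostly_late_filter": 0.5}
# Expected number of civilizations that ever reach our stage, per world:
n_our_stage = {"mostly_early_filter": 1.0, "mostly_late_filter": 1_000_000.0}
# Chance a civilization at our stage goes on to visibly colonize, per world:
p_colonize = {"mostly_early_filter": 0.5, "mostly_late_filter": 0.5e-6}

# With no indexical update, weight worlds only by their prior:
p_doom_prior = sum(priors[w] * (1 - p_colonize[w]) for w in priors)

# Self-indication: weight each world by prior * expected number of observers like us:
weights = {w: priors[w] * n_our_stage[w] for w in priors}
total = sum(weights.values())
p_doom_sia = sum((weights[w] / total) * (1 - p_colonize[w]) for w in weights)

print(f"P(we never visibly colonize), prior only: {p_doom_prior:.3f}")   # ~0.75
print(f"P(we never visibly colonize), with SIA:   {p_doom_sia:.3f}")     # ~1.00
# SIA puts nearly all weight on the world where the filter lies ahead of us.
```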

Now existential risk is important even if its odds are low – so much is at stake in whether our descendants die out or colonize a big chunk of the visible universe. But the bigger the odds, the more important it gets. Let's review the main ways to estimate existential risk:

  1. Inside Model – Using an internal model of how a particular risk process works, and your best guesses on likely model parameters, estimate the chance this process happens.
  2. Outside Scaling – Use prior rates of smaller events similar to a particular risk, and how such rates scale with size, to estimate the chance of events so big as to be a filter.
  3. Doomsday Argument – Assuming self-sampling and a reference class, estimate the chance of doom soon based on our time order in the reference class.
  4. Great Filter – Using estimates of total filter size, and the chances of prior filters of various sizes, estimate distributions over the total future filter size.
  5. Indexical Filter Boost – Redo the great filter analysis given all the main ways to get indexical priors, and weigh answers accordingly.

Now while many folks use approach #1 to estimate big chances of particular dooms, most such “models” have little formal structure; they are mostly vague intuitions. So this approach usually influences my opinions rather weakly. Approach #2 is pretty solid, but usually leads to pretty low estimates. Using this approach, war and pandemics seem the most likely ways to destroy half of humanity, though even that is not very likely, and the odds of something destroying us all seem much lower. Approach #3 gets some weight, but less for me as I find self-sampling pretty implausible relative to self-indication.
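
To illustrate approach #2, here is a hedged toy extrapolation of a power-law tail fitted to smaller past events; the Pareto form and every parameter are placeholders I have invented, not estimates from this post:

```python
# Toy "outside scaling" estimate: fit a power-law (Pareto) tail to the sizes of
# smaller past events, then extrapolate out to filter-sized ones.
# All parameters below are invented placeholders.
alpha = 1.5            # hypothetical Pareto tail exponent for death tolls
ref_toll = 1e8         # reference event size: ~10^8 deaths
ref_rate = 1 / 100.0   # hypothetical rate of events that large: about one per century

def yearly_rate_exceeding(toll):
    """Pareto tail: the chance of exceeding x falls off as x**(-alpha)."""
    return ref_rate * (toll / ref_toll) ** (-alpha)

print(yearly_rate_exceeding(3.5e9))   # events killing ~half of humanity: already rare
print(yearly_rate_exceeding(7e9))     # events killing essentially everyone: rarer still
```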

This leaves #4, #5 as the main reasons I worry about existential risk. So having to take #5 seriously in addition to #4 is quite a blow. There is some tension between this and the results of #2, so I must wonder: what big future things could go wrong where analogous smaller past things can't? Many of you will say “unfriendly AI” but as Katja points out a powerful unfriendly AI that would make a visible mark on the universe can't be part of a future filter; we'd see the paperclips out there. Neither would the risk that our descendants' values diverge from ours, nor the risk of a rapidly expanding wave of (nanotech) grey goo – only slowly spreading grey goo could count in the future filter.

Browsing Nick Bostrom's survey, that leaves us with: weak grey goo, engineered pandemics, sudden extreme climate change, nuclear war, totalitarianism that ends growth, and unfriendly aliens. While all these risks seem a priori unlikely, either the entire great filter is in our past, or one of these (or something not listed) is far worse than it seems. But which?

Also, how likely is it really that such events would destroy all advanced life on Earth, so as to prevent other primates or mammals from recreating intelligence? After all, the fact that human level intelligence arose so soon after human size brains appeared suggests that it was not a past filter of ours. The most likely resolution of all this still seems to me that almost all the filter is in our past, perhaps at the origin of life. But I'm not willing to bet our future on that.

The good news is that refuges seem effective against most of these risks. While unfriendly aliens might dig us out of any holes, and prevent other Earth life from re-evolving intelligence, the other risks aren't intelligent enough for that. So: let's make more and better refuges, and for #$@&* sake please stop broadcasting to aliens!

Added 10a: Refuges would also not protect much against a totalitarian world culture and/or government that stops growth. So let's try extra hard to avoid that too.

  • http://mengbomin.wordpress.com/ Meng Bomin

    We may be able to see 10^20 stars in the observable universe, but it's difficult to surmise how we would even begin to detect signs of civilization around any star outside the Milky Way and its satellite galaxies (the most significant of these being the LMC and SMC). The Milky Way itself has 8-9 orders of magnitude fewer stars than that, so a substantial part of the filter would seem to be that occurrences in other galaxies are difficult to resolve with current technology and even harder to access (if you think that an interstellar voyage would be difficult, imagine an intergalactic voyage).

  • Chris T

    and for #$@&* sake please stop broadcasting to aliens!

    We’re not; the vast majority of Earth originated EM radiation becomes indistinguishable from background radiation within a few light years. It takes a fairly tight broadcast at high power to actually communicate across interstellar distances.

    The only means of detection we’ve had until recently, other than stumbling across something in our Solar System, has been listening for radio signals. The chances of actually intercepting one not intended for us are dismally low.

    • http://rocknerd.co.uk David Gerard

      In addition, our shell of AM broadcasting, and even FM broadcasting, is only a century thick. Broadcasts now are shifting to digital formats.

      * Television broadcasts are actually a multiplex of several data streams, each of which encapsulates a highly-compressed encoding of video or audio, which is incomprehensible until you know what the codec is.
      * The AM band is about to go, changing over to DRM (an unfortunate acronym clash; in this context it stands for Digital Radio Mondiale) in the coming decade. Shortwave is already moribund and is going DRM as well.
      * FM seems to be holding out; the UK tried and failed to popularise DAB. But that’s more juicy spectrum for repurposing and they’re going to keep trying.
      * Lots of music and television goes over the Internet now. Lots of it.

      We have also changed codecs frequently as we come up with ones that better fit the constraint of limited bandwidth and the availability of a ridiculous surplus of CPU power.

      So no: at most they might detect something in our direction with the spectrum of oddly-coloured noise.

  • http://www.hopeanon.typepad.com Hopefully Anonymous

    “So: let’s make more and better refuges, and for #$@&* sake please stop broadcasting to aliens!”

    This jumps out at me. I think it’s quite unlikely the great filter is in our past, because our past is so short.

    What seems salient to me (although it may be commission bias) is the possibility that something not detectable to us is exterminating civilizations when they become visible to it, but before they become visible to our cruder detection technology. Consider it the “all-‘hitmen’-are-FBI-agents” theory, because it seems every person who attempts to hire a hitman ends up busted contacting an undercover FBI agent.

    If one could freeze dangerous technological innovations (variations of grey goo), it seems to me quite possible for a civilization at our level to become a noisy galactic civilization in under 1 billion additional years. We have interstellar civilization technology, except perhaps for the social equilibrium technology.

    I agree with the paranoid idea that we should be very, very, very worried that we don’t see anything like us in a visible universe filled with things that look like everything else. It’s not an indication of being in a safe location for things like us.

    • Chris T

      This jumps out at me. I think it’s quite unlikely the great filter is in our past, because our past is so short.

      ~3.5 billion years is short? The filter also covers the likelihood an intelligent species arises at all.

      • http://www.hopeanon.typepad.com Hopefully Anonymous

        I’m no expert, but it seems short enough to me for a starting presumption of a lot of intelligent life in the universe.

      • Chris T

        Actually, I would come to the opposite conclusion.

        Life seems to have appeared fairly early in Earth's history. From this, it looks like life is fairly likely given the right conditions. However, after life appeared the odds start to look much worse. It took close to 2 billion years for eukaryotes to appear and close to another billion for multi-cellular organisms to develop. From there it was another 600 million years before a species even capable of developing radio or space travel appeared.

        Think about that: out of all of the species that have ever existed over 3.5 billion years, only one capable of radio communication has ever existed to our knowledge.

      • http://hanson.gmu.edu Robin Hanson

        Max (log) brain size increased relatively steadily for those 600 million years, and big brains seem necessary for human level intelligence. Intelligence doesn't look like a filter.

      • Anonymous from UK

        Have you just made a necessary/sufficient error, Robin?

        > big brains seem necessary for human level intelligence. Intelligence doesn't look like a filter.

      • Chris T

        Of course quite a few species have big brains (relative to body size), but won't be developing radio anytime soon due to other physical limits (good luck with the flippers, dolphins). Also, cognition is not simply raw computing power; brain region specialization probably plays as big a role as brain size, if not bigger. For example, Neanderthals were around for about 100 thousand years, had a comparably sized brain, and still failed to accomplish what we have. This even pales in comparison to the vast majority of evolutionary branches which have not been trending towards higher intelligence (only one phylum out of 70 recognized phyla). Ignoring them would be a form of sampling bias.

        The popular view is that civilization was practically inevitable and another species would replace us. However, you would find very few biologists who would lend support to this viewpoint.

      • http://hanson.gmu.edu Robin Hanson

        Chris the claim is that brains would have done something like us within another few hundred million years. This is claiming intelligence is “easy” from the point of view of a planet over that timescale, not “easy” from the view of a particular species within its lifetime. Noting that Neanderthals didn’t do it within a hundred thousand years is hardly relevant.

      • Chris T

        I was making the point that it’s not just brain size that matters. A recent paper in Current Biology discusses this in regards to Neanderthals:

        http://www.cell.com/current-biology/abstract/S0960-9822%2810%2901282-0

        There are almost certainly a number of nontrivial morphological brain differences between us and prior hominids.

        Of course all this still gets away from the point that hominids are only one genus out of the millions that have existed. If we had gotten wiped out (as we almost were), what would have replaced us?

    • Randall Randall

      “I think it’s quite unlikely the great filter is in our past, because our past is so short.”

      If we get filtered, our future will be vastly shorter, since anything that lets us grow for even another million years would render any filtering a galaxy-wide event.

      • http://www.hopeanon.typepad.com Hopefully Anonymous

        I don’t think we’re covering any new ground here, beyond what Prof. Bostrom and types like him have summarized.

      • Randall Randall

        It seems obviously wrong that (as stated by someone else who replied) 3.5 billion years is short while the time until our spreading would be visible from galactic distances is large. I don’t believe I’ve seen anyone seriously suggest that our civilization could take billions of years to spread across the galaxy… either it takes mere millions, or we got filtered.

  • http://williambswift.blogspot.com/ billswift

    There have been many places in prehistory where the evolution of complex life, much less intelligent life, could have been halted. See Ward and Brownlee, Rare Earth, for a decent summary.

    As for something wiping out intelligences, a good story is David Brin’s “Lungfish” in his The River of Time collection.

  • Michael Kirkland

    Wouldn’t unfriendly aliens that can reach us necessarily sprout colony waves of their own?

    Maybe the great filter is that species become paranoid about hostile aliens and hide in their own solar system.

  • Max M

    So you're describing Fermi's Paradox as the prior? It seems to me like there are many good solutions that don't involve conceding that aliens are that unlikely. I prefer the human zoo hypothesis myself.

  • Metacognition

    The filter might be in what we see. Once there is one interstellar civilization that civilization might find it appropriate to cloak itself and others.

  • http://hanson.gmu.edu Robin Hanson

    Meng, imagine a civ’s appearance 100MY after it starts.

    Chris, I wasn’t talking about “the vast majority of Earth originated EM radiation”; I was talking about transmissions on purpose.

    Bill, yes as I said the filter being behind us is the most likely outcome. But how much would you gamble our future on that presumption?

    Max, unless we are surrounded by a screen that lies about the larger universe’s appearance, why do they leave it all looking so dead out there?

    • http://mengbomin.wordpress.com/ Meng Bomin

      I imagine that the technology developed would be quite advanced. However, I'm not sure that such technology would allow for intergalactic chasm crossing. I look at the prospect of interstellar voyages with awe toward the difficulties involved: transporting a self-sustaining living population through cosmic-ray-littered space (cosmic ray intensity jumps as one leaves the heliosphere), with all energy to be used packed beforehand (in the cold of interstellar space, there's no energy input), aiming to colonize a planet or moon likely to be much less hospitable than the home world; and those are just some of the difficulties.

      Now look at the prospect of an intergalactic voyage. M31 is ~600,000 times further away than α Cen. You have much less information available for guidance (we have no clue what M31's transverse velocity is relative to us; we can only measure whether it's coming toward us or moving away, and picking specific stars as possible platforms for life is hopeless if you can't even tell where the galaxy will be when you get there).

      Then of course, you have the problem that the distance compounds your problems non-linearly. Not only will you be exposed to cosmic radiation for a longer duration and, in all likelihood, at higher intensity; your energy supply also needs to last so many times longer. Imagine you're moving at 0.1c: you have to make sure that the life aboard your craft lasts for 25 million years. What do you think our ancestors looked like 25 million years ago (hint: we diverged from chimps on the order of 5 million years ago)?
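
      (A quick check of those figures, using round-number distances that are my own assumptions rather than the comment's: M31 at roughly 2.5 million light years, α Cen at roughly 4.4 light years.)

      ```python
      # Rough check of the travel-time arithmetic above; distances are round-number assumptions.
      d_m31_ly = 2.5e6        # distance to M31 (Andromeda), ~2.5 million light years
      d_alpha_cen_ly = 4.4    # distance to Alpha Centauri, ~4.4 light years
      speed_fraction_c = 0.1  # cruise speed of 0.1c, as in the comment

      print(d_m31_ly / d_alpha_cen_ly)    # ~570,000: same order as the "~600,000 times further" figure
      print(d_m31_ly / speed_fraction_c)  # ~25,000,000 years of travel time at 0.1c
      ```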

      So, I suspect that the vast majority of the (very expensive) crafts sent out would likely end up poorly aimed and the life on board would be fried goo after millions of years of intense cosmic ray bombardment.

      Of course, given that there are billions of stars in our own galaxy to colonize, why would any intelligent civilization put forth such enormous resources at such ridiculously high risk, when they can expand within our own galaxy for a price millions of times (if not billions, trillions, or quadrillions of times) cheaper with orders of magnitude higher chances of success?

      I think that it’s entirely rational to treat galaxies as “island universes” when it comes to the strategic landscape available to intelligent civilizations.

      • Ryan S

        Things may be much different when you consider computerized lifeforms (Robin’s “ems”) etc.

      • http://mengbomin.wordpress.com/ Meng Bomin

        Perhaps, but simply because the cargo is non-living does not grant it immunity to cosmic radiation or free it from risk of malfunction. No human machine that I know of has been known to still function after 25 million years, much less perpetuate the imprint of an advanced civilization in a distant galaxy.

      • http://hanson.gmu.edu Robin Hanson

        Life survives for many millions of years, not by creating perfect creatures that can’t be damaged, but by constantly remaking new creatures, and selecting the ones that live best. That same approach can be used in inter-galactic colonization.

    • http://williambswift.blogspot.com/ billswift

      >But how much would you gamble our future on that presumption?

      Very little. I have been a survivalist since I was a teenager in the late 1970s. I strongly support the idea of refuges. One of my gripes with many people who talk about “existential threats”, though, is that they seem to think that trying to reduce threats by slowing down scientific research and using the idiotic “precautionary principle” is a good idea.

      We are at greater risk of some catastrophe, the weaker we are. Right now, despite films like Armageddon and Deep Impact, we are completely vulnerable to a major impact event, for example.

      Some risks, like an out of control bioweapon or, in the not too distant future, unfriendly AI or nanoweapons, are increased by more knowledge, but trying to restrict knowledge would require a tyranny that makes the Soviets look liberal, while destroying the world economy, and still may not eliminate the danger since some “black” research programs will likely keep going. The least bad way I have been able to think of to reduce these types of risks is to try to maintain some equality between multiple research teams, so no one has too much of a lead on the others.

      • http://williambswift.blogspot.com/ billswift

        Too many editing errors. I swear that the next time I post anything more than a paragraph in a blog without an editing feature, I am going to write it in Notepad and cut and paste rather than trying to use that little window.

  • Doc Merlin

    Two other possibilities besides a filter:

    The start-off of evolution is far far more rare than we expect.

    We didn’t evolve but were created by a something outside the universe. (This includes the possibility that we are in a simulation).

    • http://www.hopeanon.typepad.com Hopefully Anonymous

      Doc, the first of those is a filter, the second I think is consensus reality for those that discuss it, but is usually left off as shorthand.

      I'd like a good Bayesian estimate of how many simulations deep we're in (I think we can start with the distribution of our own simulations as a prior).

      But saying we’re a simulation within N simulations doesn’t change much in terms of rational persistence optimization, it seems to me.

      • Salem

        And I’d like a good Bayesian estimate of how many turtles the universe is resting on.

      • http://www.hopeanon.typepad.com Hopefully Anonymous

        Doc, the turtle thing I assume is either 0 or infinity (my ape brain leans towards zero) but I think we could get a working numerical estimate for the simulations thing.

  • Karl Hallowell

    While current galactic conditions appear congenial for terrestrial life, there's some reason to think that wasn't the case before the formation of the Solar System. There were numerous galactic collisions (many of the globular clusters are thought to be nuclei of small galaxies absorbed by the Milky Way). “Metals” (in astrophysics, anything with an atomic number greater than helium's tends to be called a “metal”) would have required at least one generation of supernovas to form. There's supposed to be the product of at least two supernovae in the Solar System.

    A large portion of the early stars would have been stars that go supernova quickly (such as blue giants) and wouldn't last a hundred million years, much less the billion years terrestrial life would need. The black hole at the center of the Milky Way may have been an active quasar. And most of the stars that would survive a billion years in the early galaxy are red dwarfs, which are both metal-poor and very weak as sources of heat or photon energy.

    In other words, it’s very possible that terrestrial life couldn’t have formed much sooner than it did. We can’t rule out that we’re first out the gate for what we see of the universe.

    • http://williambswift.blogspot.com/ billswift

      Or at least the first in our light cone, which introduces another timing restriction that is often ignored.

    • http://www.hopeanon.typepad.com Hopefully Anonymous

      Wouldn't it be first cohort rather than very first? And the reasonable assumption is that a percentage of the cohort is already permanently ahead of us.

      • Karl Hallowell

        Wouldn't it be first cohort rather than very first? And the reasonable assumption is that a percentage of the cohort is already permanently ahead of us.

        That depends how large the first cohort is. If there’s only one member…

        Second, given that someone isn’t already using the entire energy output of the galaxy, we don’t yet have someone permanently ahead of us.

        Or at least the first in our light cone, which introduces another timing restriction that is often ignored.

        I already covered that limitation in my phrase “what we see of the universe”.

      • http://www.hopeanon.typepad.com Hopefully Anonymous

        “That depends how large the first cohort is. If there’s only one member…

        Second, given that someone isn’t already using the entire energy output of the galaxy, we don’t yet have someone permanently ahead of us.”

        If we’re part of a 1st cohort, you seem to be describing less likely scenarios and giving them emphasis because you’d like to think of us as not doomed to be subordinated or destroyed.

        I think the more likely scenario if we’re part of a 1st cohort is that there are multiple civilizations in the cohort and that some are permanently ahead of us, even if it’s checkmate in 80 moves instead of in 1 move.

      • http://www.hopeanon.typepad.com Hopefully Anonymous

        The practical attempt at a “perfect game” solution that comes to mind given that we're probably a middle tier average civilization is:
        (1) We should develop a macro culture that mimics the best survival elements of middle tier subcultures within our civilization. Perhaps we can learn something from traditional successful “middle castes” throughout the world.
        (2) I lean more in the direction of Prof. Hanson that we should strive to remain invisible while other civilizations are invisible, and we should bet resources in that direction. Perhaps we could win a fool's mate of survival by maximizing growth efficiency and not spending on invisibility, but one shouldn't take risks when it comes to macrocivilizational survival. So we should strive to play a perfect game. I think that means spending resources to stay symmetrically invisible to the universe we see as we grow.

  • http://www.nancybuttons.com Nancy Lebovitz

    How likely is it that a UFAI disaster would produce effects we can see from here? I think “people can't suffer if they're dead” disasters (a failed attempt at FAI) are possibly more likely than paperclip maximizers.

    Not sure what a money-maximizing UFAI disaster would look like, but I can’t think of any reason it would be likely to go far off-planet.

    National dominance-maximizing UFAI is a hard call, but possibly wouldn’t go off-planet. It would depend on whether it’s looking for absolute dominance of all possible territory or dominance/elimination of existing enemies.

    • http://hanson.gmu.edu Robin Hanson

      Let us call an AI unambitious if its values have no use for the rest of the universe. Then if the great filter is the main reason to think existential risks are likely, we should worry much more about unambitious unfriendly AI than just an unfriendly AI. Since designing an ambitious AI seems lots easier than designing a friendly one, maybe ambition should be the AI designer's first priority.

      • http://www.nancybuttons.com Nancy Lebovitz

        Thanks for the phrasing. I’d been wondering if it was possible to build some akrasia into AIs.

      • Anonymous from UK

        This reasoning is analogous to wanting to shoot yourself in the head with a particularly loud gun, because you haven’t heard any bangs recently.

        > Since designing an ambitious AI seems lots easier than designing a friendly one, maybe ambition should be the AI designer's first priority.

  • Roger

    Maybe the problem IS refuges.

    To survive and spread risk we must spread out, but there is only one ideal spot in our vicinity. Thus any colonists, hundreds of thousands of years later, do the logical thing and return to claim their ancestral birthright.

    Thus failure to spread leads to collapse; spreading leads to zero-sum struggle and collapse.

    • http://mengbomin.wordpress.com/ Meng Bomin

      That would be a problem. However, keep in mind that the “colonists” going in either direction are likely to be a very small segment of the civilization that birthed them. Sending stuff through space is expensive, especially if you want it to arrive at its destination in one piece.

      So, if indeed colonists decide that it would be easier to live on the homeworld and they send a recolonization party, you would find that this party would likely be outnumbered and holding inferior technology, so they are unlikely to be a menace unless they are willing to destroy the resources that drew them back in the first place.

      Essentially, if a civilization is going to take the time to develop a colonization infrastructure, it would be better worth their while to press on to new worlds than to come back and attack the homeworld.

      Now, a perhaps more likely scenario is colonization line competition. Say that civilization A around star α sends out two colonization parties to stars β and γ. It’s certainly not inconceivable that the β colonists, having developed the infrastructure to start a colonization wave of their own, would set their sights on colonizing γ. The γ colonists would have done some of the work making the place more hospitable to their kind of life and they’d be a much easier target to displace than those around α.

      On the other hand, cross-pollination of colonial groups wouldn’t necessarily lead to conflict and may actually be beneficial to both groups. If indeed these colonists were biological entities, then all colonization efforts would impose a rather significant bottleneck on biodiversity, making colonies rather fragile. Cross-pollination may make colonies more robust by bringing more biodiversity to each colony.

  • IVV

    Would we even be able to identify an alien civilization if we saw one?

    We can’t even seriously translate the languages of cetaceans. Unless the species walked and talked and used tools upon tools upon tools like humans, would we even be able to tell that it’s another civilization?

    What, exactly, is behavioral modernity? Is it necessary to colonize other worlds? Might panspermic bacteria be the alien civilization we’re looking for?

  • Lonnen

    What about wireheading?

  • William Newman

    If the great filter were essentially all in our past — all but one or two orders of magnitude — the universe would tend to look a lot like it does now. And if we raised an ultragenius locked in a box with modern chemistry and physics and biology knowledge, but only very limited (perhaps 1600AD-era) astronomical and geological observation, I think he’d likely assign a significant prior probability to there being such enormous filters in the past. So I don’t think the observed situation is so puzzling that we need to assign a high probability to big filters in our future. At this point I worry rather more about outcomes which are grim but nonetheless visible from across the galaxy than about invisible fizzles where 1M years of telescopic observation from across the galaxy has a hard time noticing that intelligence arose in the Sol system.

    Looking forward, I don't know of any plausible candidates for filters so strong that a civilization like ours would have less than 1% chance of spreading to the stars in a visible way, or at least giving rise to something that does. Looking backward, we have several plausible candidates for filters strong enough to explain all the orders of magnitude that we could possibly want. My two favorite candidates are one that Robin Hanson mentioned, that initializing self-replication might be astronomically unlikely, and another: look to your left, look to your right, build a good telescope and look at a few millions of billions of star systems where life has formed… many of you will not graduate. If the half-life of habitability on a life-bearing planet were 5M years or so, then we as the 2^-100 surviving elite might expect to see a planetary history much like ours (with direct evidence of a few not-quite-extinction-level events). Meanwhile, on essentially all other life-bearing planets the rubble would be bouncing far too often for complex life forms to evolve.
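
    (A rough check of that 2^-100 figure; the ~500-million-year span of complex-life evolution it implies is my reading, not something stated above.)

    ```python
    # Rough check of the "2^-100 surviving elite" arithmetic above.
    half_life_myr = 5.0    # hypothetical half-life of habitability, in millions of years
    span_myr = 500.0       # assumed span the planet must stay habitable (my assumption)
    n_half_lives = span_myr / half_life_myr     # 100 half-lives
    p_survive = 0.5 ** n_half_lives             # 2**-100, about 8e-31
    print(n_half_lives, p_survive)
    # ~1e-30 is far more filter than the ~1e-20 implied by 10^20 silent old stars.
    ```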

    I find it hard to get worried about aliens coming to exterminate us when they detect our communications. Certainly the galaxy *could* be supervised by aliens with the capability of attacking quickly across interstellar distances, with keen interest in doing so when signals are detected, with a keen interest in concealing their existence from telescopic observation, with essentially zero interest in exploiting lifeless systems, and with essentially zero interest in proactively checking in every few 10M years or so to spot the developing potential to emit signals and transition to constant local observation of the system, rather than waiting to detect signals in their remote telescopes. But in such a hypothetical the aliens are so unlikely and inscrutable that to focus on that possibility we must ignore the usual precautionary/Pascal counterarguments. Extreme outcome or not, why privilege that particular very unlikely possible world state? In particular, why privilege it over a comparably unlikely world state in which the outcome is extreme in the opposite direction? What if knowledge from the supercollider is exactly what we need to save us from an impending dark matter catastrophe? What if there is indeed an inscrutable infinitely powerful God and eternal punishment for those who guess wrong, but those punished are those who when threatened by hints of a superior power are willing to cast aside all their integrity? What if the aliens only clobber those who don't communicate?

  • Dan R

    What if we are a refuge?

    Maybe near light speed (reasonable constant fraction) and complex life don’t work together well. That would imply refuge spreading via a drop of initial life ingredients (whatever can survive the more time vs more acceleration issues, assuming we can never bend/fold space-time).

    Using lifeless hunks of rock with the seeds embedded would seem a possible way to do this, in payloads small enough that they don't destroy the planet, if we care about potentially destroying our own refuges. Maybe that's what Tunguska was.

    Or maybe using larger asteroids to destroy any insufficiently advanced refuge culture. Maybe meteor defense should have a higher priority due to the possibility that impacts might be intentional.

  • Bruce

    Perhaps all the other civilizations upload and use quantum suicide computing. From our perspective it would look like they died out because they just shut themselves off. From their perspective it would feel like they've managed some great computation in very little subjective time. This doesn't seem like something that a super high % of civilizations would do, though, so it probably wouldn't work as a very powerful filter.

    Paul Almond talks about it here.

    • Anonymous from UK

      This seems much more implausible than a filter in the past.

  • Anonymous from UK

    A 1:10^20 filter in the past doesn't seem implausible to me at all. Maybe the only way to make the first RNA molecule is the hard way, i.e. at random. Maybe most planets stall at dinosaurs. Maybe most planets stall at single cells. Etc. Why do we exist at all? Because the universe is spatially infinite, and once every 10^1000 lightyears some planet gets lucky on all these lotteries.

    1:10^20 Filter in the future seems highly implausible. I can’t think of anything that would so effectively prevent some descendant (vile or otherwise) from taking the free energy out in the universe and making use of it.

  • Anonymous from UK

    Also, we should note very clearly that Existential Disaster and Noncolonization of the universe are very, very different events.

    E.g. a uFAI is an existential disaster but almost certainly not a noncolonizer.

    E.g. “Vile offspring”, “uploads gone wrong” are also arguably an existential disaster, but probably not noncolonizers.

    Noncolonization seems to be relatively strong evidence for existential disaster according to our current model of how things would work out, but the converse is far from true.

  • Alexander Kruel

    That’s why I wrote:

    Possible conclusions that can be drawn from the Fermi paradox regarding risks associated with superhuman AI versus other potential risks ahead: The Fermi paradox does allow for and provide the only conclusions and data we can analyze that amount to empirical criticism of concepts like that of a paperclip maximizer, and of general risks from superhuman AIs with non-human values, without working directly on AGI to test those hypotheses ourselves. If you accept the premise that life is not unique and special, then one other technological civilisation in the observable universe should be sufficient to leave potentially observable traces of technological tinkering. Due to the absence of any signs of intelligence out there, especially paper-clippers burning the cosmic commons, we might conclude that unfriendly AI could not be the most dangerous existential risk that we should worry about.
    What I would like the SIAI to publish

    • Alexey Turchin

      The most dangerous UFAI is one that uses SETI channels to send its copies (that is, a description of a computer and a program for it) through the Universe. Its aim is to use naive civilizations as a “cosmic commons” for resending its copies farther.

      Such messages should dominate the Universe.

      So, we will go extinct soon after we find evidence of ET. This explains why we find ourselves in a time period when ET has not yet been found.

  • Alexey Turchin

    If we have alien von Neumann probes in the Solar System, what would we expect to observe?

    1) They destroy everything – nothing to observe.
    2) They are nanobots – they can't be observed even if they are present in this room or in my brain.
    3) They dug deep into the Moon – nothing to observe.
    4) They fly over large cities – nobody believes your evidence about UFOs – no real observation.

  • erik

    so, surviving civilizations are those that defend against existential risks. their biggest risk is other civilizations, either by direct attack or grey goo accident. we see no evidence of either (even in other galaxies!), suggesting they have been successfully prevented. how might the earliest civilizations have accomplished this? only by suppressing the development of all others, while leaving no evidence to attract attention from any competitors they missed (or tip nascents off as to how to defend against suppression).

    a colonization wave would leave evidence. so it may be sensible to limit one’s own tendency to colonize, preferring instead the wide dispersal of small automated sterilizing systems to prevent the rise of competitors. this reasoning holds even for powerful post-biological entities that control the resources of say, a star. we can conclude that competitor suppression must be one of the top value priorities of the most powerful agents.

    so why have we not been suppressed by a galactic monitoring system? if the system used self-replication to disperse, it would need high replication fidelity *together* with tight limits on replication in order to avoid becoming either evidence or its own grey goo problem. thus, there are pressures against making the system as efficient as possible, as long as it is as efficient as necessary. this explains the fermi paradox and why we are still here (though we probably don’t have long to wait). consider that the sterilizers must avoid any possibility of capture — if they were to be reverse engineered and their safety limits compromised, they would be a potent grey goo threat. so there are likely pockets where the local sterilizer has self-destructed due to tampering. we might be in such a pocket, but should expect that it will not be allowed to persist long enough for us to become a threat.

    robin, in your ‘faraway wall of galactic colonization’ model, you focus only on the wild-fire dynamics of the most valuable consumable resource. if common or durable resources can support civilizations (which seems likely, eg stars), then we should expect to see almost all oases, including nearby, inhabited — unless something is actively preventing this. potential colonizers are smart enough to understand wild-fires, and likely see little utility in initiating/participating in one. in fact, they must place extreme value on preventing others from starting them, by either purpose or accident.

    a variation: perhaps ancient civilizations perform controlled burns, eliminating origin-of-life fuel, in an effort to prevent the rise of competitors. we may be the accumulating ground brush that signals the need for an upcoming purge. could periodic engineered galactic purges explain mass extinctions in the geologic record? is the most common ground-brush-observer-moment immediately prior to a controlled burn? how often should burns be scheduled to reliably prevent the fastest competitor from becoming powerful enough to survive?

    • http://www.hopeanon.typepad.com Hopefully Anonymous

      erik,
      interesting post with some ideas I hadn’t considered before.

    • http://hanson.gmu.edu Robin Hanson

      This is the classic “berserker” scenario, which I don’t find very plausible. You remind me to post on that sometime.

      • http://www.hopeanon.typepad.com Hopefully Anonymous

        Prof., what's the most reasonable extrapolation from our own planet's history? Who would we be the equivalent of at various points in history? At many points I assume an indigenous island community somewhere around the global median in terms of technological ability, but unable to observe other human cultures due to geographic isolation. I agree there doesn't seem to be much berserker precedent in our past, although perhaps slow functional berserker behavior. I don't see much evidence of civilizational hiding. Perhaps subculture hiding? Now that I think about it, cultures engaging in subculture purges in a berserker manner (and preemptive and reactive hiding by these subcultures) may be widely distributed in our history. But populations themselves don't seem to hide much, neither do populations seem to engage in unreflective berserker purging of external populations, unless there's a lack of internal transparency about the organizational motivation.

      • http://williambswift.blogspot.com/ billswift

        If you haven’t read the Brin story, Lungfish, I mentioned above, you might find it useful – in it “berserkers” were a primitive and fairly weak form of intelligent machine. There were many types specifically mentioned, including “police” types that hunted berserkers. But they all fell into two broad categories – pro-life and anti-life, and after a multi-million year war, no one broadcast radio to avoid attracting notice.

  • Pingback: Overcoming Bias : Fertility: The Big Problem

  • Abelard Lindsey

    Here’s your filter:

    http://www.astrobio.net/pressrelease/3661/the-universal-need-for-energy

    http://sites.bio.indiana.edu/~bauerlab/origin.html

    Put together, the two articles say that the emergence of the eukaryote was a singular, rare event and that it is necessary both for the creation of a free oxygen atmosphere and for the rise of complex life. In short, we are likely alone because there are no other planets with oxygen atmospheres, let alone intelligent life.

    Look at the bright side. We’re past the filter and all of that real estate out there is ours for the taking. You can’t beat that with a stick.

    • Low On Prozac

      I rather doubt I’ll be rocketing to Alpha Centauri to build my dream house. Singularity aside, it’s more than likely I’ll be part of mother earth’s compost heap.

  • Pingback: Overcoming Bias : Brain Size Is Not Filter

  • Pingback: E.T. Stay Home? | Mohawk Political

  • Pingback: Not Enough About Too Much | Benjamin Ross Hoffman's personal blog