Today Is Filter Day

By tracking daily news fluctuations, we can have fun, join in common conversations, and signal our abilities to track events and to quickly compose clever commentary. But for the purpose of forming accurate expectations about the world, we attend too much to such news, and neglect key constant features of our world and knowledge.

So today, let us remember one key somber and neglected fact: the universe looks very dead. Yes, there might be pockets of life hiding in small corners, but for billions of years billions of galaxies full of vast resources have been left almost entirely untouched and unused. While we seem only centuries away from making a great visible use of our solar system, and a million years from doing the same to our galaxy, any life out there seems unable, uninterested, or afraid to do the same. What dark fact do they know that we do not?

Yes, it is possible that the extreme difficulty was life’s origin, or some early step, so that, other than here on Earth, all life in the universe is stuck before this early extremely hard step. But even if you find this the most likely outcome, surely given our ignorance you must also place a non-trivial probability on other possibilities. You must see a great filter as lying between initial planets and visibly expanding civilizations, and wonder how far along that filter we are. In particular, you must estimate a substantial chance of “disaster”, i.e., something destroying our ability or inclination to make a visible use of the vast resources we see. (And this disaster can’t be an unfriendly super-AI, because that should be visible.)

Assume that since none of the ~10^20 planets we see has yet given rise to a visible expanding civilization, each planet has a less than one in 10^20 chance of doing so. If so, what fraction of this 10^20+ filter do you estimate still lies ahead of us? If that fraction were only 1/365, then we face at least a 12% chance of disaster. Which should be enough to scare you.
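
For anyone who wants to check that arithmetic, here is a minimal sketch in Python, assuming the simplest reading: the total filter is a factor of at least 10^20, and the fraction still ahead applies to that exponent.

    # Simplest model: passing the remaining filter has probability
    # 10 ** (-20 * fraction_ahead); "disaster" is failing to pass it.
    for fraction_ahead in (1 / 365, 1 / 12):
        p_pass = 10 ** (-20 * fraction_ahead)
        print(f"fraction ahead {fraction_ahead:.4f}: chance of disaster >= {1 - p_pass:.1%}")
    # prints roughly 11.9% for 1/365 and 97.8% for 1/12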

To make sure we take the time to periodically remember this key somber fact, I propose that today, the day before winter solstice, the darkest day of the year, be Filter Day. I pick the day before to mock the wishful optimistic estimate that only 1/365 of the total filter remains ahead of us. Perhaps if you estimate that 1/12 of the filter still lies ahead, a filter we have less than a 2% chance of surviving, you should commemorate Filter Day one month before winter solstice. But then we’d all commemorate on different days, and so may not remember to commemorate at all.

So, to keep it simple, today is Filter Day. Take a minute to look up at the dark night sky, see the vast ancient and unbroken deadlands, and be very afraid.

What other activities make sense on Filter Day? Visit an ancient ruin? A volcano? A nuclear test site? The CDC? A telescope?

  • Ely Spears

    Have an unfiltered beer or cigarette?

  • Dunbrokin

    What about us down under….this is an antipodianist sentiment! :)

    • warpinsf

      I sent this off earlier today to a friend in Australia with a remark about that.

  • Romeo Stevens

    Optimism: we are being kept in a fish bowl and humans who suffer greatly are p-zombies.

  • dmytryl

    Let me note that the probability of 1 kilobit of genetic code forming spontaneously is 2^-1024. We don’t know how much of a low-probability ‘miracle’ life requires, but it can’t be very little (or we’d have abiogenesis in the lab), and intuition often fails on exponents. If it requires merely several times more luck than “we didn’t have it forming in the lab”, there’s simply no life anywhere nearby.
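
    To see how badly intuition fails on exponents here, a rough scale check in Python; the trial budget below is made up of deliberately generous, invented numbers, not estimates from this comment.

        import math

        bits = 1024                            # one specific 1-kilobit sequence
        log10_needed = bits * math.log10(2)    # ~308 orders of magnitude
        # Invented, deliberately generous budget: 10^20 planets, 10^30 trial
        # molecules on each, one trial per molecule per second for 10^17 seconds.
        log10_available = 20 + 30 + 17
        print(f"need ~10^{log10_needed:.0f} random trials, have ~10^{log10_available}")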

    • Rafal Smigrodzki

      Evolution does not leap, it crawls. We have plausible reasons to assume that life started in very small steps, much smaller than the generation of even as little as 1kb of genetic data. From the bizarre structure of our energy metabolism some (Nick Lane) have concluded that metabolism developed prior to the emergence of genes, as strange as it may seem. This would greatly lower the hurdle to abiogenesis. The fact that life appeared on Earth almost as soon as the physical conditions became mild enough to allow it is another reason to doubt extreme unlikelihood of abiogenesis.

      Of course, this puts a larger share of the great filter ahead of us, and is worrisome, for those who enjoy thinking about humanity’s destiny.

      Rafal

      • dmytryl

        Yes, the early appearance of life in the solar system is rather problematic for the notion that the probability of abiogenesis is low, but not irrecoverably so – for example, it may be that the probability of long-term habitability of a planet is likewise low, or it may be that life originally forms in the protostellar cloud. The issue with simple systems is not so much reproduction as the ability to make small but beneficial changes – easy on a big genome, hard on a bare-bones replicator. Crawling is precisely the problem, as on a very simple replicator you may need leaps.

  • Thepolemicalmedic

    >> (And this disaster can’t be an unfriendly super-AI, because that should be visible.)

    I think this parenthetical thought deserves a much wider hearing/discussion, given how large a proportion of X-risk H+ers in general (and the SI in particular) assign to AI.

    • http://kruel.co/ Alexander Kruel

      One of the many problems AI risk advocates have to explain.

    • Gedusa

      Katja has a post on it, but I haven’t seen any more discussion about it: meteuphoric.wordpress.com/2010/11/11/sia-says-ai-is-no-big-threat

    • gwern0

      I think Robin must be simplifying since it’s just a parenthetical comment; I’m sure he doesn’t think that ‘*every* possible unfriendly super-AI will infallibly leave visible signatures detectable with telescopes and will also not expand at the speed of light such that we die very shortly after detecting any signatures’. Some unfriendly super-AIs will leave such traces. Some won’t.

      I’m not even sure how much Bayesian evidence this could be about AI risks. Knowing that there’s no one out there, doesn’t that screen off the observation that these not-people not-made not-UFAIs?

      • http://kruel.co/ Alexander Kruel

        It seems unlikely that all AI’s that do constitute an existential risk only destroy their own civilization but don’t try to destroy others as well for the same reason, and/or harvest much more resources than just those of their home planet.

        If that was the case, then why would any given AI consume the whole planet and not just a bit of it, and only kill those beings which constitute a danger? In which case it wouldn’t actually be an existential risk and we’d end up seeing aliens somewhere.

      • gwern0

        > It seems unlikely that all AI’s that do constitute an existential risk only destroy their own civilization but don’t try to destroy others as well for the same reason, and/or harvest much more resources than just those of their home planet.

        Seems unlikely based on… what? I can think of large categories where a UFAI may just stop: military AIs, for example, after destroying the enemy which was accidentally defined as the whole world may just stop. (Notice that in Saberhagen’s Berserker universe, he never explains how the berserkers could go from ‘kill the enemy civilization’ to ‘kill all living beings’.)

        > If that was the case, then why would any given AI consume the whole planet and not just a bit of it, and only kill those beings which constitute a danger?

        There are many specifications which stop short of the universe but also go far beyond ‘just the beings which constitute a danger’. But even if one granted that point, the AI may need only consume a bit of the world to constitute an existential risk. Remember, the great silence is about no technological works or galactic colonization. If civilization collapsed due to an overly thorough AI which thankfully stopped short of killing everyone on the planet, would humanity ever go to space again in any form? For example, where would the coal and oil for another Industrial Revolution come from? Humanity spent 100,000 years without farming with no apparent problem and then more thousands of years with barely any progress; we could easily spend the next 100,000 years doing the same thing… until something else bad happens and the human story finally ends.

      • http://kruel.co/ Alexander Kruel

        > I can think of large categories where a UFAI may just stop…

        If you believe large categories of AI’s would just stop, then large categories should also pose no existential risk, and existential risk from AI should be much less probable, as it would be easier than FAI advocates predict to limit the scope of AI’s.

        > Seems unlikely based on… what?  [...] There are many specifications which stop short of the universe but also go far beyond ‘just the beings which constitute a danger’.

        Based on the thought that possible alien U/FAI’s endanger many possible goals a U/FAI could have.

      • dmytryl

        gwern0: if you can think of large categories of the AI that just stop, then you’re almost all the way to seeing the large categories of the AI that just don’t kill people.

      • http://www.gwern.net/ gwern

         dmytry:

        > if you can think of large categories of the AI that just stop, then you’re almost all the way to seeing the large categories of the AI that just don’t kill people.

        Of course there are large categories which kill everyone in the world and stop there. I didn’t think that was really in dispute. But I’m not sure how you go from there to ‘large categories that just don’t kill people’.

        By the way, has it occurred to either you or XiXi that this argument is a dilemma for you? The Great Silence (assuming some highly dubious anthropic reasoning like the SIA if we take Kaj’s post at face-value) as evidence against the risk of creating a bad AI only works if you also believe that a bad AI would have universal consequences. So you’ve lowered the risk… by arguing that the consequences would be *so* disastrous that distant alien races could detect how disastrous it was? That doesn’t sound like it’s reducing the expected risk, and it sounds like something both of you have argued at length in the past against – that AI would be dangerous. Sure you want to take that horn of the dilemma?

      • dmytryl

        gwern:

        I don’t even believe this silence argument works; see my top-level comment here. But if we are to suppose it works:

        “Of course there are large categories which kill everyone in the world and stop there.”

        Yet, the arguments in favour of the difficulty/unlikelihood of non-mankind-killing AI are equally applicable to the unlikelihood of non-sun-eating AI or non-galaxy-eating AI. I’m not arguing that it would be disastrous, I’m just following the logic of the existing arguments about the danger of the AI.

      • dmytryl

        We still detect the AIs that expand at the speed of light, by dying. Those would make it unlikely to live some 5 billion years later than the first lot of stars of Sun-like metal content.

      • http://www.gwern.net/ gwern

        If that were the case, we could not use it to update on: we would be dead! The anthropics just doesn’t work there; we have to be alive to make observations.

      • dmytryl

        gwern: it’s as much anthropics as concluding the gun failed to kill you if someone tries to shoot you and fails to kill you.

      • http://www.gwern.net/ gwern

         > its as much anthropics as concluding the gun failed to kill you if someone tries to shoot you and fails to kill you.

        I, uh, don’t think that’s quite analogous. If the aliens are the gun in your analogy, then it’s a gun we don’t know exists nor that it has been fired. And if we did know that the gun has been fired, then we couldn’t conclude how dangerous it is because we would only be thinking about the danger if we survived, no matter if the bullet was harmless or super-lethal and kills 99.99% of people shot at. That’s what I meant about anthropics.

      • dmytryl

        This ‘because’ is anthropic nonsense. If I am making drone software, and it detects a missile launch, missile going after it, and the missile fails to destroy it, the good software will update its estimate of the efficacy of this type of missile, the same way as if the missile was saying “bang”.

      • http://www.gwern.net/ gwern

        > This ‘because’ is anthropic nonsense. If I am making drone software, and it detects a missile launch, missile going after it, and the missile fails to destroy it, the good software will update its estimate of the efficacy of this type of missile, the same way as if the missile was saying “bang, you’re dead!” instead of destroying the drone.

        Huh? This doesn’t seem like a reply. What is this drone? Where’s the missile launch? How does this analogy apply to our situation at all?

      • dmytryl

        gwern: it’s a direct reply to your

        “then we couldn’t conclude how dangerous it[gun] is because we would only be thinking about the danger if we survived”.

        The former simply doesn’t follow from the latter. Which I hoped would get clearer for you if mankind thinking about it is replaced with drone processing data only if it survives.

        On the broad point, I disagree that the conclusions are in any way dependent on whether we failed to detect aliens by the non-arrival of harmless light or by not having been eaten. It simply doesn’t matter (or shouldn’t, for rational persons who do not believe in souls).

      • http://www.gwern.net/ gwern

        > The former simply doesn’t follow from the latter. Which I hoped would get clearer for you if mankind thinking about it is replaced with drone processing data only if it survives.

        You need to explain your analogies better… So in this case, mankind is the drone being attacked by a missile, the missile explodes and fails to kill the drone. But this is the same thing as the old firing line anthropic question: a man is sentenced to be executed by firing squad, they fire, and he then thinks “wow, they must be really awful marksman, maybe drunk or something, for all of them to miss!” I don’t think many would agree with him.

        And the drone and the firing squad are both different in a way that I’ve already pointed out and you have ignored: they know that a missile was launched or the firing squad shot, respectively. We don’t know any such thing.

      • dmytryl

        > But this is the same thing as the old firing line anthropic question: a man is sentenced to be executed by firing squad, they fire, and he then thinks “wow, they must be really awful marksman, maybe drunk or something, for all of them to miss!” I don’t think many would agree with him.

        Well, I wouldn’t process it any differently than if they were shooting nerf guns. The me that thinks they are poor shots wouldn’t exist if they didn’t all miss. It doesn’t matter whether the alternative is a me that is dead or some me that is thinking “damn it, the nerf dart got stuck to my eye.”

        > And the drone and the firing squad are both different in a way that I’ve already pointed out and you have ignored: they know that a missile was launched or the firing squad shot, respectively. We don’t know any such thing.

        I don’t disagree that we don’t know. I disagree that destruction vs detection makes a difference.

      • http://www.gwern.net/ gwern

        > I don’t disagree that we don’t know. I disagree that destruction vs detection makes a difference.

        …wow. OK, I’m not an anthropics expert so I have no idea how to persuade you that that is crazy. Maybe Katja could chime in, but I’m not counting on it.

      • dmytryl

         > …wow. OK, I’m not an anthropics expert so I have no idea how to persuade you that that is crazy.

        You have no idea, period.

        Urg and Ugh live in a land with different kinds of berries, some extremely poisonous, some safe. Urg eats one type of berry, and if he doesn’t die, concludes it is safer than others, and sticks to that type. Ugh eats one type of berry, confuses himself with the irrelevant counterfactual (if it wasn’t safe I wouldn’t be alive), and just eats whatever berries. Urg has a 50/50 chance of survival and his progeny survive. Ughs die out.
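
        A toy simulation of the story above, assuming one of the two berry types is always lethal and counting how often each strategy survives 20 meals:

            import random

            def survives(strategy, meals=20):
                lethal = random.randrange(2)        # which berry type is poisonous
                first = random.randrange(2)         # type tried at the first meal
                for _ in range(meals):
                    # Urg sticks with his first (survived) type; Ugh keeps sampling both.
                    berry = first if strategy == "urg" else random.randrange(2)
                    if berry == lethal:
                        return False                # dead: no further meals, no further reasoning
                return True

            trials = 100_000
            for s in ("urg", "ugh"):
                print(s, sum(survives(s) for _ in range(trials)) / trials)
            # Urg survives about 50% of the time; Ugh essentially never survives 20 meals.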

      • http://www.gwern.net/ gwern

        > Here’s another one where it is pretty simple. Urg and Ugh live in a land with two different kinds of berries, one extremely poisonous, one safe. Urg eats one type of berry, and if he doesn’t die, concludes it is a safe one, sticks to that type. Ugh eats one type of berry, confuses himself with irrelevant counterfactual (if it wasn’t safe I wouldn’t be alive), and just eats what ever berries. Urgs got 50/50 chance of survival and their progeny survives. Ughs die out (albeit now and then mutations create new Ughs). The way Ugh ignores evidence is, actually, the insane one, suicidally so.

        Just going to point out that this is just the same wrong analogy you’ve been pointing out the entire thread (and thereby confusing the issues by ignoring the original questions in favor of new misleading or incomplete examples): we are not in the position of either Urg or Ugh. We have eaten no berries that we know of.

      • dmytryl

        gwern: And because we “know of no such thing” (have probabilistic knowledge), the survival is to be processed differently from not seeing some odd light, or what?

      • http://www.gwern.net/ gwern

        > And because we “know of no such thing” (have probabilistic knowledge), the survival is to be processed differently from not seeing some odd light, or what?

        Er, yeah. If you don’t have certain data, you don’t reach certain conclusions. Shocking, I know.

      • dmytryl

        gwern: Whatever; at least you seem to have finally got it about the firing squad, even if you still don’t get that it makes no difference whether the super-AIs expand at near the speed of light and kill us, or are merely visible.

      • Katja Grace

        Because Gwern asked me to chime in:

        According to SIA, and to me: not being killed carries the same information as not observing X, if the scenarios are the same except that being killed is replaced by observing X. 

        According to SSA: not being killed tells you nothing, if your reference class is people who are alive now. If your reference class is people who were alive at some point, then not being killed is informative.

      • Carl Shulman

        Gwern,

        We make some specific observations: we are alive, we are billions of years into the history of the universe, and we don’t see any aliens. Either an alien colonization wave reaching the Earth or visible signs of aliens would have produced different observations. An increased tendency for aliens to evolve and signal or spread would lower the frequency of young civilizations with observations like ours in a given region of space. So we get the Fermi paradox, by SIA.

        If you want to (a la some forms of SSA) consider the observations of a representative sample of randomly selected young civilizations (drawn from a list of such civilizations throughout the history of the universe), our observations would be more atypical if aliens evolving to spread or signal were common: most civilizations would exist earlier in the history of the universe, before there had been time for colonization waves or visible signals to reach us.

        So by the same logic used by Bostrom and Tegmark to rule out frequent astrophysical catastrophes destroying most planets like ours, one can infer that it is very unlikely that colonization waves pre-empt almost all observations like ours: http://arxiv.org/abs/astro-ph/0512204

        The same logic has been applied to a vacuum transition destroying everything with lightspeed expansion. There’s no relevant difference between that and an alien colonization wave pre-empting our observations in either of these frameworks. So we still have the Fermi paradox.

      • dmytryl

        Katja: Thanks. I’ve posted in the other thread describing my view more exactly. I just treat the question “where are we” as “what’s around us”; when a theory provides for several observers, that’s several theories, each with a specific observer; the theories may have some sensible relations between their priors, or not.

        edit: and SSA pretty much arbitrarily ignores evidence by lumping things together into a “reference class”.

    • http://www.facebook.com/yudkowsky Eliezer Yudkowsky

      AFAIK, all SIAI personnel think and AFAIK have always thought that UFAI cannot possibly explain the Great Filter; the possibility of an intelligence explosion, Friendly or unFriendly or global-economic-based or what-have-you, resembles the prospect of molecular nanotechnology in that it makes the Great Filter more puzzling, not less.  I don’t view this as a particularly strong critique of UFAI or intelligence explosion, because even without that the Great Filter is *still* very puzzling – it’s already very mysterious.

      • Katja Grace

        It is a critique of the intelligence explosion roughly as much as it is a critique of any prediction of far-reaching human expansion.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        > It is a critique of the intelligence explosion roughly as much as it is a critique of any prediction of far-reaching human expansion.

        How do you get that? The intelligence explosion would seem to imply a far-reaching expansion more strongly than does a far-reaching expansion imply an intelligence explosion. (At least sci-fi authors seem to agree.) But in any event, it’s not obvious to me that they’re comparable.

  • http://www.facebook.com/jstorrs.hall J Storrs Hall

    You only need posit that the aliens as a matter of course develop versions of the FAA, FCC, and EPA.  No other explanation is necessary of why they haven’t visited, called, or built any visible structures.

    • Hedonic Treader

      Take this argument seriously for a moment, and then analyze how plausible it is. Will all aliens inevitably create institutions and policies that stifle their growth and expansion completely, with zero exceptions? If this were true, why didn’t the alien J Storrs Halls successfully complain their way out of it?

  • a dude

    This should be the day to celebrate the improbable good luck that got us through whatever filter is behind us. One can claim that 10^18 of the planets we see are simply not suitable for the form of life we can recognize as such. Heck, even most of our planet looks no better than the Mars pictures sent by Curiosity (think Bakersfield, CA). The remaining civilizations may just have chosen not to manipulate large amounts of energy in a way that we’d see, because there is no point. The physical world could be too boring as the complexity of multiplying uploaded brain emulations exceeds the physical world’s complexity by multiple factors. Take the blue pill and celebrate the Matrix.

  • VV

    > While we seem only centuries away from making a great visible use of our solar system

    Are you sure? http://physics.ucsd.edu/do-the-math/2011/10/why-not-space/

    Even optimistic scenarios of space exploitation are typically limited to some mining operations on the asteroids, plus perhaps (but even less likely) some permanent colony on Mars or the Jovian and Saturnian moons. Not something an alien observer would notice by looking at our star from their world. Even covering the Sun with a Dyson sphere (probably a physical impossibility) wouldn’t really put up a great show.

    > and a million years from doing the same to our galaxy

    Our galaxy has 100 – 400 billion stars, distributed in a volume of 3 – 4 * 10^13 light-years^3, with a diameter of 1.0 – 1.2 * 10^5 light years.

    So, unless we started expanding at relativistic speed (10% of the speed of light: the typical speed of an electron in a cathode tube, according to Wikipedia), we couldn’t possibly colonize the galaxy within that time frame.

    (It takes 4.5 * 10^14 Joules to accelerate one kilogram of mass to 10% of the speed of light, without any losses)
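
    A quick check of those two figures in Python, assuming a 10^5 light-year crossing at a steady 0.1c and ignoring acceleration time:

        import math

        c = 299_792_458.0                 # speed of light, m/s
        ly = 9.4607e15                    # metres per light-year
        year = 3.156e7                    # seconds per year
        v = 0.1 * c

        crossing_years = (1.0e5 * ly) / v / year      # ~1e6 years edge to edge
        gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
        kinetic_j_per_kg = (gamma - 1.0) * c ** 2     # ~4.5e14 J per kilogram
        print(f"{crossing_years:.1e} years, {kinetic_j_per_kg:.1e} J/kg")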

    > If that fraction were only 1/365, then we face at least a 12% chance of disaster. Which should be enough to scare you.

    Why should I be scared if my distant descendants never make it off the Earth? Anyway, the “disaster” could very well be just the enormous distances between the stars in the galaxy.

    • Hollister David

       A lot of Murphy’s arguments are wrong. See http://hopsblog-hop.blogspot.com/2012/02/in-his-blog-stranded-resources-tom.html

      I agree the distances between star systems would prevent expansion to other stars. But expansion through our solar system is more plausible than Murphy’s bad math and silly straw man arguments would have you believe.

      A solid sphere about the sun is impossible. However, a Dyson swarm is possible: http://en.wikipedia.org/wiki/Dyson_sphere#Dyson_swarm

      Main belt asteroids only receive a fraction of the sunlight earth does. Large parabolic mirrors could be constructed  at these locations to harvest sunlight. A system of parabolic mirrors in the main belt would be a start towards a Dyson ring.

      • VV

         Thanks for sharing.

        I’m not qualified to evaluate your math over Murphy’s. He did mention that he made some simplifications, but, as you point out, given the exponential nature of the rocket equation even a small difference in terms of delta-v budget can make a great difference in terms of propellant mass.

        The idea of using the lunar ice caps as a source of fuel seems interesting, but is certainly highly speculative at this point. Assuming it is technically feasible, it would require the construction of massive solar or nuclear powered infrastructure on the Moon in order to extract the ice and split it into oxygen and hydrogen.

        Anyway, travel to Mars and back, considering aerobraking, all the orbital mechanics tricks and even possibly lunar refueling, is always going to cost lots of energy, all to get to a barren planet without large ore resources, nothing worth the cost of bringing it back.

        As for asteroid mining, you mention that capturing pieces of ~20 m diameter extracted from some near-Earth asteroids when they pass close to the Earth-Moon Lagrange points could cost ~1 km/s delta-v.

        Assuming this calculation is correct, and that the object is mostly iron, it has a mass of 6.3*10^7 kg. Assuming an oxygen-hydrogen propellant and ignoring any additional overhead, the rocket equation says you need 1.6*10^7 kg of propellant, which is about 2.7 * 10^6 kg of hydrogen, and has an energy content of 3.3 * 10^14 J.

        At the liquid hydrogen price of 3.6 $/kg reported here (which refers to 1980 and is not adjusted for inflation): http://www.astronautix.com/props/loxlh2.htm the fuel would cost ~10 million dollars. Iron ore price depends on the quality, but is well below 1.0 $/kg, making the asteroid mining business unfeasible just due to fuel costs.

        Even if technological developments could cut the costs of fuel production, and all the other huge costs as well, 6.3*10^7 kg of iron ore every now and then would be a minuscule fraction of the world’s annual production, which amounts to 2.4*10^11 kg. The business would lack economies of scale with respect to the conventional iron mining business.
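
        For anyone wanting to reproduce those propellant figures, a small sketch of the Tsiolkovsky rocket equation with the inputs stated above, plus an assumed specific impulse of ~450 s and an oxidizer-to-fuel mass ratio of about 5 (the ratio is an assumption; it is roughly what recovers the hydrogen figure):

            import math

            m_asteroid = 6.3e7                # kg, the mass stated above
            delta_v = 1_000.0                 # m/s
            v_exhaust = 450.0 * 9.81          # assumed Isp ~450 s for LOX/LH2, in m/s

            # Tsiolkovsky rocket equation: propellant needed to give the asteroid delta_v
            m_propellant = m_asteroid * (math.exp(delta_v / v_exhaust) - 1)   # ~1.6e7 kg
            m_hydrogen = m_propellant / 6.0   # fuel share at an assumed ~5:1 O/F ratio, ~2.7e6 kg
            energy_j = m_hydrogen * 1.2e8     # ~120 MJ/kg heating value of H2 -> ~3.2e14 J
            print(f"{m_propellant:.1e} kg propellant, {m_hydrogen:.1e} kg H2, {energy_j:.1e} J")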

      • Hollister David

        I published that blog post in February. About 2 months later the formation of Planetary Resources was announced. Also in April a pdf was published: http://kiss.caltech.edu/study/asteroid/asteroid_final_report.pdf Many of the co-authors of this paper are part of the Planetary Resources team.

        I had suggested some asteroids could be parked in high lunar orbit for as little as .3 km/s. The pdf points to an asteroid that can be retrieved with as little as .17 km/s. It was gratifying to see this paper back up my numbers and to show I was even being pessimistic.

        On the bottom of page 15 of the KISS pdf they talk about safety. They advocate (as I did) retrieving asteroids small enough to harmlessly burn up in the upper atmosphere should the rock hit the earth.

        Their first asteroid mined will likely be a water rich asteroid (as I mentioned, propellant high on the slopes would break the exponent in Tsiolkovsky’s equation). The 2nd substance mined might be PGMs, not iron. See http://www.planetaryresources.com/asteroids/usage/

  • Doug

    C’mon Robin, winter solstice is good, but you really should have waited till the Mayan apocalypse tomorrow. 

    Then you would have had some chance of being the rationalization for turning the huge drinking day that people will be celebrating tomorrow into a recurring annual holiday.

    “Filter day: celebrate now for our civilization is probably doomed!” Kind of like a cosmic Cinco de Mayo.

  • Pingback: Merry Filter Day « Selfish Meme

  • http://www.selfishmeme.com/ The Watchmaker

    Excellent idea. I made a short post to introduce the concept to my readers. http://www.selfishmeme.com/217/merry-filter-day/

  • dEMOCRATIC_cENTRALIST

    What does overblown concern with the distant fate of humanity signal? That’s the question that should occupy Robin, rather than transhumanist speculation. What do the existential-risk “charities” signal?

    My current hypothesis: concern with long-term existential risk signals loyalty to the community. It says, “I’m loyal to the community; the community ought to accept me fully as a member.” It appeals primarily to nerds because, even when they attain high status, they remain social outcasts (for being bores). They experience the need to convince the community to accept them by demonstrating their loyalty.

    • Hedonic Treader

      > What does overblown concern with the distant fate of humanity signal?

      This question is only relevant insofar as it affects the causality of the actual fate of humanity. Why else would anyone care about the answer, except for signaling anti-signaling signals, which is as childish as you can get?

      • dEMOCRATIC_cENTRALIST

        Relevant to what?

        What it’s relevant to is self-knowledge. You’re saying that if it isn’t relevant to the future of humanity, then it’s irrelevant as such. This is the worst form of dogmatism, and it’s the result of a gaping lack of self-knowledge among self-righteous transhumanists.

        We don’t ask, after all, why the signaling function of charity in general is worth knowing unless it contributes to charitable contributions.

      • Hedonic Treader

        Fair enough. I don’t care very much about such knowledge unless it’s useful; I probably don’t want to know everything about my psychology. If we’re talking self-knowledge for its own sake, I’d rather engage in cheap entertainment instead.

    • John Maxwell IV

      If your comment is actually meant to be relevant to the possibility of existential risks, this seems like a textbook ad hominem attack.  “Let’s ignore what Robin’s saying and focus on why he might be saying it!”

  • Vlad

    Perhaps we’re intentionally kept in the dark by those who have already colonized the galaxy (and who had a head-start of a few billion years).

  • http://www.facebook.com/zgochenour Zac Gochenour

    Filter Day is just too close to Christmas.

  • chepin

    An unfriendly super-AI should be visible only on the assumption that a super-AI must consume a lot of resources to be visible. Maybe this AI is procrastinating on us because we are not a threat or a challenge.

  • soreff

    A dramatic reading of Clarke’s “A Walk in the Dark”?

  • Jonathan Colvin

    The most credible explanation for me is that once societies hit the singularity, life inside machine heaven becomes far more interesting/entertaining than life in the relatively boring, huge expanse of the real. We might send out von Neumann probes or have our AI keep a look to windward for any potential problems, and we might keep quiet to avoid attracting potentially unwelcome attention, but really, what would be the point of heading out into the desert when we can have whatever virtual heaven we desire (24/7 sex inside a giant cupcake if that turns your crank) inside the machine?

    • ShardPhoenix

       If civilization is common and things like von Neumann probes  are practical, it only requires a small minority of people/civilizations who want to expand (perhaps for irrational reasons) to fill up the sky.

  • warpinsf

    “If so, what fraction of this 10^20+ filter do you estimate still lies ahead of us? If that fraction were only 1/365, then we face at least a 12% chance of disaster.”

    I don’t understand the calculations behind this. 10^20/365 ~= 10^17. Where does the 12% come from? Am I misunderstanding that a 10^20 filter means a one-in-10^20 chance of passing the filter?

    • Robin Hanson

      Try 1 – exp(log(10^-20)/365).

      • warpinsf

        That formula results in 0.05.

    • Tim Tyler

      I think it is: 1-10^(-20/365) = 0.1185.
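
      The two answers in this sub-thread seem to differ only in the base of the logarithm: Robin’s expression equals Tim’s 0.1185 if “log” is the natural log, while reading it as log10 (but keeping exp base e) gives roughly the 0.05 reported above. A quick check in Python:

          import math

          p_natural_log = 1 - math.exp(math.log(1e-20) / 365)    # == 1 - 10**(-20/365), ~0.1185
          p_log10_mixup = 1 - math.exp(math.log10(1e-20) / 365)  # log10 fed into exp(), ~0.053
          print(round(p_natural_log, 4), round(p_log10_mixup, 4))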

  • warpinsf

    Also, I tend to prefer the terminology common in venture capitalism, where there is a harsh filter between an initial idea and a successful company.

    In that space, progress through the various filters is called “de-risking” [1]. The “filter” is considered a huge pile of risk that gets eaten up until the business either succeeds or runs out of money. 

    This is largely how I see humanity’s shot at exploiting the solar system and beyond. We’re endowed with “funding” in the form of earth, and we’ll either remove what separates us from galactic greatness or we’ll fail and converge on a much less desirable lifestyle.

    [1] http://www.mbi.org/derisking.html

  • rrb

    I assumed that this has been addressed before, but, what if civilized galaxies just go dark? It seems like a waste to let all those photons escape to the void instead of harvesting them.

    How do we know that the night sky isn’t full of civilized, invisible galaxies?

    • VV

       *cough*second law of thermodynamics*cough*

    • Tim Tyler

      Living systems typically don’t care too much about waste because of concern for the present.  When the choice is between burning bright and burning long, burning bright often wins. Resources are needed now and the long term is just too far away to matter. That line of reasoning suggests not too many dark galaxies.

  • mjgeddes
  • http://web.mac.com/deweaver Dallas Weaver

    If other life is out there and expanding over many solar systems, what emissions could we necessarily detect?  If our understanding of the laws of physics is correct, their expansion rate would be much less than the speed of light — no warp drives.  All energy supplies, even fusion, will take resources and will be conserved and not just emitted into space with little directional accuracy.  They wouldn’t have become a great civilization without considering energy efficiency.  Any beamed energy such as for communication between solar systems will be very high frequency and extremely directional with near zero probability of hitting earth (possibly in the soft X ray range).

    As our civilization advances, is the range where our civilization can be detected decreasing, as old wide-beam radar is replaced by highly directional systems using far less energy and over-the-air broadcasting goes the way of the Dodo?

  • Jonathan Colvin

    If the “filter” is all of us downloading into some machine heaven computational substrate, then observing the vast dead expanse is not something to be afraid of; rather the opposite. It means we are unlikely to be disturbed while we revel in our giant cupcake. 

  • Rick

    Only an American could talk so expansively about the universe and then assume all literate life resides in his part of the earth (it’s not winter solstice here).

    • http://overcomingbias.com RobinHanson

      ~90% of humans live in the Northern Hemisphere. (source)

  • Rafal Smigrodzki

    Robin, I don’t quite understand the specific assumptions behind the calculation you made (although I do get your general point). I think it is useful to split the great filter into two: the pre-civilization filter and the expansion filter. The pre-civ filter acts at all phases of development of the putative 10^20 life-capable planets and thus shapes the number of civilizations at our stage. The expansion filter is meaningful among actually existing civilizations and describes the likelihood of being unable or unwilling to expand fast enough to be seen. It is hard to estimate our chance of success without making at least implicit assumptions about the relative contributions of these two filters. The processes likely to contribute to each of the filters are also likely to be diverse and different: pre-civ means gamma bursts, close stellar encounters and various development blocks (metabolism-genes, single-multicellular, instinct-general IQ), while the expansion filter might include total war, civilization-wide ennui, some yet unpredicted development block (e.g. practical impossibility of building starships capable of flying fast enough to produce visible spheres of civ influence in time to be seen by us).

    Questions abound, and I find it hard to even articulate the assumptions needed for a vague estimate of risk, although in some general sense I share your worry about our chances.

  • Tim Tyler

    Only consider the planets in our galaxy with liquid water and we are already down to around 10^10 candidates.

  • Linas Vepstas

    Repost from g+

    Well, b-duhh, isn’t it obvious that we live in a simulated universe, where we ourselves are the object of simulation? Or rather, what is being studied is the birth of super-AI? So, of course, the starting point is a universe devoid of super-AI. :-)

    Oh, b-duhh, just to save a bit of energy and expense on compute power for such a simulation, the simulators have avoided generating an excess of entropy, and are thus using quantum to explore multiple possibilities for no extra cost. Right? Isn’t this where the facts are pointing?

    On the other hand, we also don’t know what fraction of brown dwarfs are super-AI’s. I mean, for all we know, they are just playing WoW all day long, navel-staring, and not pursuing universal domination.

    Why would they do that? Perhaps they’re psychotic? Evolution has very carefully crafted the human brain to be functional, yet it’s clear by looking around that, in a certain sense, everyone is a bit crazy. And some fraction of humanity really is certifiably crazy. Perhaps sanity is not automatic, but something that must be carefully tuned and adjusted: an unstable fixed point, a thing to be almost lost at any time, by the smallest perturbation. So maybe all advanced AI’s simply go insane before they get to planetary scale. Who’s to argue otherwise?

  • Mark

    “Yes, it is possible that the extremely difficultly was life’s origin” <— what?

  • Pingback: Overcoming Bias : Future Filter Fatalism

  • TheMechanicalAdv

    This makes no sense. A few decades ago, we were afraid of aliens. Now we should be afraid of the lack of aliens? How about we just stop being afraid? The earth should be big enough for all of us!