Very Bad News

Back in ’98 I considered the “doomsday argument”:

A creative argument [suggests] “doom” is more likely than we otherwise imagine. … [Consider] the case of finding yourself in an exponentially growing population that will suddenly end someday. Since most of the members will appear just before the end, you should infer that that end probably isn’t more than a few doubling times away from now.

I didn’t buy it (nor did Tyler):

Knowing that you are alive with amnesia tells you that you are in an unusual and informative situation. … The mere fact that you exist would seem to tell you a lot.

I instead embraced “self-indication analysis”, which blocks the usual doomsday argument.  In ’08 I even suggested self-indication helps explain time-asymmetry:

Even if we knew everything about what will happen where and when in the universe, we could still be uncertain about where/when we are in that universe. … [So] we need … a prior which says where/when we should expect to find ourselves, if we knew the least possible about that topic. …  Self-indication … says … you should … expect more to find yourself in universes that have many slots for creatures like you. …

Given self-indication we should expect to be in a finite-probability universe with nearly the max possible number of observer-moment slots.  … [which] seem large enough to have at least one inflation origin, which then implies … large regions of time-asymmetry.

Alas, Katja Grace had just shown that, given a great filter, self-indication implies doom!  This is the great filter:

Humanity seems to have a bright future, i.e., a non-trivial chance of expanding to fill the universe with lasting life. But the fact that space near us seems dead now tells us that any given piece of dead matter faces an astronomically low chance of begetting such a future. There thus exists a great filter between death and expanding lasting life, and humanity faces the ominous question: how far along this filter are we?

And here is Katja’s simple argument, in one elegant diagram:


Here are three possible worlds, and within each possible world three different planets are shown on the X axis, while three different times are shown on the Y axis.  The three worlds correspond to three different times when the great filter might occur:  1) before any life, 2) before intelligent life, or 3) before space colonization.

After at first thinking you are in a random box, you update on the fact that your planet recently acquired intelligence, and conclude you are somewhere in the middle row.  Then you update on self-indication, i.e., that you exist, and so are in an orange box.  You conclude you likely live in world 3.  (It has 3/5 of the orange boxes.)  Doom awaits!
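The counting behind that update can be sketched in a few lines; the box counts (1, 1, and 3 orange boxes) are read off the diagram as described above, and this is an illustrative sketch, not anything from Katja's post:

```python
# Orange boxes = observer-slots like us (intelligent, not yet colonizing)
# in each of the three possible worlds, per the diagram.
orange_boxes = {"world 1": 1, "world 2": 1, "world 3": 3}

# Start from an equal prior over the three worlds.
prior = {w: 1 / 3 for w in orange_boxes}

# Self-indication: weight each world by its number of slots for
# observers like you, then renormalize.
weighted = {w: prior[w] * orange_boxes[w] for w in orange_boxes}
total = sum(weighted.values())
posterior = {w: weighted[w] / total for w in weighted}

# World 3 ends up with 3/5 of the posterior, matching "3/5 of the orange boxes".
print(posterior)
```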

The diagram just illustrates the general principle.  As Katja disclaims:

The small number of planets and stages and the concentration of the filter is for simplicity; in reality the filter needn’t be only one unlikely step, and there are many planets and many phases of existence between dead matter and galaxy colonizing civilization.

Alas, I now drastically increase my estimate of our existential risk; I am, for example, now far more eager to improve our refuges. And let’s avoid the common bias to punish the bearers of bad news; Katja deserves our deepest gratitude; forewarned is forearmed.

  • You can prove most anything you desire with such calculated hypothetical scenarios. We don’t know what the odds are. You might as well be rolling dice.

  • Angels, pinheads. . .

    • Jayson Virissimo

      Myths about the Middle Ages never seem to die, but are taught anew to each generation of college students.

  • But Robin—We know we’re not in World 3, because we’ve got observations on 7 or 8 “planets”—namely the various continents here on earth where life developed independently for hundreds of millions of years—and only one of those (Africa) got as far as intelligent life. That tells us where the (or at least a) filter is.

The total filter is huge, as there are 10^22 stars in the visible universe, and perhaps many more continents. Knowing where a factor of 7 of that filter lies still leaves a lot of the filter unknown, waiting to be found.

    • Konkvistador

You can’t really confidently say only one continent developed intelligent life. While it’s clear that the latest batch of surviving hominids are from Africa, we can’t be sure that in the past no migrations in and out of Africa occurred (several mammalian species follow this pattern). Hobbits have shown us that strange hominids pop up in unexpected places and times.

I think it’s more fair to say we have two reasonably large worlds, the World Island and the Americas, and intelligence developed on one of those.

  • Rain

    That’s all it took to change your mind? You make it sound as if you were in the 10 percent range, and a single, hypothetical blog post raised your opinion to 90 percent.

    When talking about the end of human civilization, I’m surprised it takes so little to sway you so much.

    • This blog post is real, not hypothetical. And how many posts should it take? Or is it that I should only be convinced by articles or books?

      • But isn’t the argument leaving off a fourth possible world, where the filter occurs after interstellar colonization, which would put doom far off, and be just as likely as world three?

      • The point is that we can exclude the scenario where a large fraction colonize enough to become visible now, as nothing is visible now.

      • Dániel Varga

        The point is that we can exclude the scenario where a large fraction colonize enough to become visible now, as nothing is visible now.

We cannot exclude the scenario where there are many large colonizer civilizations but we still cannot observe any of them. Maybe they are all expanding at the speed of light.

The previous sentence is an extremely condensed version of a pet theory of mine. Let me share a bit more detail. The main idea is a 0-1 law for the expansion speed of civilizations. I argue that there is only a very short timeframe in the life of a civilization when their sphere of influence is already expanding, but not yet expanding at exactly the speed of light. If they are before this short phase transition, they can’t be observed with current human technology. After the phase transition they can’t be observed at all.

  • Rain

    Care to make a prediction?

  • Ryan S.

This is a single short, recently published blog post and a couple of cute pictures – how sure are you that you aren’t as mistaken in your (strong) reaction to this as you apparently were in previous arguments about this issue?

  • Dre

    What immediately pops into my head is a rough proof by contradiction.

    Assume Doomsday argument is true.
    1) Choose person A at random time t1.
    2) Apply Doomsday argument.
    3) Conclude person A is ‘likely’ to be in the final generation of humans.
4) Choose person B at random time t2.
5) Apply the Doomsday argument as in 2)-3); conclude person B is ‘likely’ to be in the final generation of humans.

(3) and (5) cannot both be true, therefore the Doomsday argument must be false.

    Or in other words, this seems to have no predictive power because it can be applied to any individual; it does not constrain expectations. (Or, what would count as evidence against it?)

    Am I not seeing something?

    • Jack (who uses this name at LW)

      3 and 5 both can be true (and according to the argument, are both true).

      • Vladim Eisenberg

        Probably late to the party, but doesn’t it (anthropic doomsday argument) then reduce to the trivial observation that the later along a finite process you are, the closer you are to its ending? I mean, you *have* to be closer to the final generation of humans than your ancestors simply because *you* exist thus their generation could not have been final.

    • Steve

      You’re randomizing across time instead of across birth order. The anthropic doomsday argument works by birth order; and if you’re truly picking people at random by birth order from the first homo sapiens sapiens ’til now, they’re pretty likely to have been born reasonably recently.
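Steve’s point is easy to check with a toy model; the doubling-per-generation growth below is a made-up illustration, not data:

```python
# Toy exponentially growing population: generation g contains 2**g people.
generations = 30
pop = [2 ** g for g in range(generations)]
total_people = sum(pop)

# A person drawn uniformly by birth order from everyone ever born is
# very likely to come from one of the last few generations.
share_last_three = sum(pop[-3:]) / total_people
print(share_last_three)  # ~0.875 of everyone ever born
```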

  • Tim Tyler

    Characterising this idea as “DOOM” seems to be inaccurate and misleading. The idea does not show that some organisms fail completely – but rather that they don’t make rapid progress towards a galactic civilisation – due to some kind of roadblock or other. A progress roadblock is just not the same thing as “DOOM”.

    • Konkvistador

I think people tend to jump towards this due to being unable to imagine “progress” ending or slowing down considerably. I don’t know how complex a system humans can build; we’ve climbed very far for a species where only a small fraction has been selected for anything but functioning in a population of a hundred or so individuals. I’m not even sure our current level of complexity can be maintained, or if we are experiencing growth whose gains are bound to disappear once we finish eating next season’s seeds.

      • Tim Tyler

        The world is full of illogical nonsense about the end of the world. It makes for good headlines and it gives people a warm fuzzy feeling to be heroically warning others about impending DOOM. The end of the world is a contagious mind virus which forms the basis of many cults. In this particular case there is a huge logical hole in the supporting argument.

  • Tim Tyler

    Personally, I think things have moved a bit beyond the point where anthropic reasoning tells us much of interest about this issue. We can see – in some detail – the events in the past that caused hold-ups – and have a pretty good idea about the hold-ups and delays that we will face in the future. An argument from mere existence ignores far too much pertinent information to tell us very much that we didn’t already know.

  • Many worlds theory and the simulation argument seem to cause significant problems for this type of anthropic reasoning. Some bias toward the creation of universes with early filters could in fact put most conscious observers in the multiverse in universes of Type 1 or Type 2.

  • Tim Tyler

    Re: “Alas I now drastically increase my estimate of our existential risk”

    This seems like a pretty trivial reanalysis of what we already know. It tells us next to nothing about the probability of “existential risk”. So: I think you should reconsider your position.

  • Robin,

    I think that the magnitude of the animal -> human filter may be slightly greater than expected. One of the primary hypotheses in early human evolution is that persistence hunting gave a large and necessary caloric boost to early hominids.

Persistence hunting is a general hunting strategy in which a hunter chases their prey over a very long distance (15-30 miles). Eventually the prey becomes weak and succumbs to exhaustion. On its own, that’s not very special — both dogs and hyenas adopt a very similar hunting strategy. The key difference is that humans lack the extremely sensitive sensory abilities that are used by other persistence hunters.

    In the absence of such senses, early hominids had to predict where prey would go and accurately pursue them over very long distances in the complete absence of any sensory information. In short, early hominids hunted by simulating the minds of their prey. If you look at (a documentary section on persistence hunting), you can watch some Kudu tribesmen literally simulating where an antelope will go.

    I’m no expert on AIXI, etc., but it certainly seems to me like lucking into an evolutionary niche where you get caloric benefits that are directly linked to how well you can simulate the fairly-complicated minds of your prey is pretty much a recipe for extreme selective pressure in favor of general intelligence.

    This isn’t the best general argument for where we are on the filter timeline, but is mainly intended as a simple illustration for just how weird and unlikely human evolution could have been.

  • MPS

    It seems to me you’ve made a mistake in “updating” your Bayesian probabilities.

    When you drew the three “worlds,” you drew each with “one” space colony. Presumably, the rationale here is that the number of observers in any world is dominated by those in space colonies, so we employ SIA by normalizing worlds by their number of space colonies. Fine; then worlds 1-3 are equally likely.

    But then, you add new information: we know we are not members of a space colony, so we restrict to a reference class of observers more like us. However, why shouldn’t we do this from the beginning? That is, why shouldn’t I draw worlds 1-3 all with “one” observer in the center row (and therefore world 3 has 1/3 of a space colony)? Then we’re equally likely to be in any world, as SIA intuitively suggests.

  • Robin

    Okay, I believe this as far as “we’re unlikely to colonize space”. Maybe I don’t understand the original doomsday argument, but it looks to me like there are three “no space colonization” scenarios:

    a) eventually stable population
    b) eventual extinction
    c) continuing exponential growth without space colonization

Evidently the self-indication effect militates against a, but it actually makes c more likely! In a “c” world we continue to discover new technology that allows for additional population growth, so every point in time looks like now.

    So I think I believe in impending doom only if someone convinces me that c is impossible…

Why would every point look like now with more population growth? Perhaps if the growth was in simulations some of the experiences would look like now, but future people will mostly not believe they live in the 21st century. Knowing we do live there, worlds with more people who definitely don’t live there do not become more likely under SIA.

You are right that this argument works for any of those options, but B seems the most likely of them because of the seemingly small chance that a civilization could last for so long without any of its members going far elsewhere – it needn’t be all of them.

I am not sure I completely get the galactic colonization argument. It’s not entirely clear to me why we would want to go to other stars.

        Now if Robin is right that eventually high breeding subsets will take over then there is a resource argument. It seems likely that we will eventually need another source of energy besides the sun.

        At the most extreme levels we might run out of matter here in the near solar system. But this seems like an extremely far out scenario. Way beyond running out of energy given that we can manifest people as electronic or potentially even photonic signals.

        However, it seems easy enough that we just implement population controls and not worry about it.

        My thinking is that there might be things out in the universe that are dangerous to us. Indeed, that should be our baseline assumption since if there are two perpetually expanding intelligences only one can survive.

        Our hope should be that they, like us, decide to keep to their neighborhood and not invite trouble.

        Moreover, if we can rewrite people’s source code then we can just write out the instinct to reproduce and it seems that the problem is solved.

      • Robin C

        I only meant that every point looks like now in the sense that “it’s been exponential growth so far and my failure of imagination regarding future tech makes it look like that can’t continue much longer”.

        As Karl points out my “c” is mistaken – eventually there isn’t enough matter for exponential growth to continue.

        However, I don’t think self-indication gives us a reason to believe in:

        b) eventual extinction vs.
        c) exponential growth until material constraints kick in

        at least not in the simple “Since most of the members will appear just before the end, you should infer that that end probably isn’t more than a few doubling times away from now” version. It only tells us that if we already believe “c” to be impossible we should believe the eventual extinction will occur soon.

        Re: “B seems the most likely of them because of the seemingly small chance that a civilization could last for so long without any of its members going far elsewhere – it needn’t be all of them”. I’m not sure what has to count as “space colonization” for the great filter part of the argument to work – lots of civilizations could have a small number of members go a small number of places elsewhere without running across each other. But I really don’t want to start multiplying massive numbers by tiny numbers and trying to make sense of it!

  • Chris

    When you consider all of the species that have been on this planet and that only one (that we’re aware of) has ever created a technical civilization, the filter at step two starts looking pretty big.

  • Stephen

    Doesn’t Katja’s argument assume that our existence in the “non-space-colony” state is compatible with observing other species’ space colonies? This is not necessarily the case. If there exists an expansionist space traveling species, why would that species not have already appropriated the resources of our own planet, thereby preventing our own evolution?

    In other words, the observation “I observe no space colonies” contains little to no more information than the observation “I live in a one-planet, pre-spacefaring society”. (This is the Fermi’s paradox argument.)

  • Mir

    Robin, I think that there is a major flaw in the great filter argument.

“But the fact that space near us seems dead now tells us that any given piece of dead matter faces an astronomically low chance of begetting such a future.”

The fact that space near (and far from) us seems dead tells us nothing (or almost nothing) about the existence of advanced civilizations. They could really be using not just the resources of our nearby stars, but the very atoms on our earth, and even our own atoms, for their existence without us knowing that they are here.

Even on our earth, many species exist near each other, where the dumber species is not aware of the existence of the more intelligent species (e.g. bacteria in our guts are not aware that we are also here). If a civilization at some point gets to some form of technological singularity (i.e. some form of intelligence explosion), the difference in intelligence between pre- and post-singularity intelligences is much, much bigger than between us and bacteria.

It would certainly be trivial effort for such a civilization to use our resources and hide from us:
Maybe they are so advanced that they live on totally different spatio-temporal scales (e.g. they use subatomic processes for computation). Each time we perform some sort of physical experiment, they know about it well in advance (because it is trivial effort for them to read out information from all human brains) and leave alone that space-time slot of the universe where we perform the experiment, so that it seems to us that these atoms behave according to “ordinary” physical laws, and we conclude that no one is there.

It seems to me that anyone who claims that no one is there, from the fact that we see no one, did not grasp the concept of technological singularity. It would certainly be ridiculous to expect that such an advanced civilization would come in metal spaceships and fly around. For me, the fact that such civilizations actually are here is the most obvious explanation for the Fermi paradox.


    • Tim Tyler

      Disagree – mainly for this reason: “One might speculate that the reason why we have not seen any extraterrestrial civilizations is not because there aren’t any, but because they’re invisible. Maybe there is a secret society of advanced civilizations that know about us but have decided not to contact us until we’re mature enough to be admitted into their club. Perhaps they’re observing us, like animals in a zoo. This is known as the “zoo hypothesis”. However, I don’t think this is likely. On Earth, life has spread to every nook and cranny that can support it. Life goes wherever it can, and that includes the galaxy.” –

      • Mir

Tim, I also do not support the “zoo hypothesis” in this form. Life on Earth has spread to every nook and cranny that can support it. My thesis is the same: advanced civilizations have spread everywhere. That we do not see them is not evidence (certainly not strong evidence) that they are not here.

Such an advanced civilization would probably exist on very different spatiotemporal scales (as I said, subatomic and sub-femtosecond or whatever — probably for efficiency reasons) and certainly would not be flying spaceships around.


      • That they are not exploiting the energy gradients that fuel our civilization is pretty powerful evidence that they are not here, IMO. To find viable scenarios, one has to hypothesize that they are actively hiding. There are some possible scenarios there – but they don’t look terribly probable.

  • Stuart Armstrong

    I know I agreed with this argument when it was first presented to me, but I now suspect it may have a problem – a failure to update on the time since the big bang (indexical temporal information).

    Currently working on a model to see if this problem is genuine.

  • tom

Does this reevaluation mean that Prof. Hanson has increased his odds that we are in a simulation? Or is he guessing about the rules of the simulation? Or is this part of his performance to remain interesting, so that the manager of the simulation keeps him alive?

  • Pablo Stafforini

The argument doesn’t only show that the filter is very probably ahead of us. It also shows that it is likely to be as far ahead of us as is compatible with our not observing signs of extraterrestrial intelligence. Thus, while I agree that the argument requires us to increase our estimates of eventual (premature) extinction, depending on our prior assumptions it may also require us to decrease our estimates of imminent extinction.

    • This doesn’t seem right to me, at least in the simple versions I’ve thought of. Can you make your argument via box diagrams like Katja did?

      • Pablo Stafforini

No, Robin, you are right. I thought about the argument a bit more on my way home and realized I was mistaken.

  • Stuart Armstrong

    My post countering this argument can be found at:

  • random guy 27

    Frankly Robin, I’m disappointed.

    (1) Maybe nearly all advanced civilizations don’t care to colonize their galaxies — maybe they spend all their time in pleasant simulated worlds (think holodeck, or the matrix).

    (2) Maybe we’re in such a backwater of the galaxy that the galactic empire doesn’t care to come here.

(3) Maybe we’re in a “protected ecological reserve” of the galactic empire.

  • dj superflat

    seriously, how can you attempt to conclude anything from something you expect but do not observe, without knowing whether the something is actually there (despite your failure to observe). it’s like someone in the 13th century basing a theory on the absence of a new world (or the fact that the earth is not a sphere, or is the center of the universe), or someone in the 20th century basing a theory on the absence of planets outside this solar system, etc.

put another way, as others have noted, you have no way to assign any probabilities of anything relevant to the inquiry. aren’t the odds best we are in a simulation? how do you figure the odds we’re not the bacteria in a petri dish wondering why the universe seems to be finite, lack other bacteria, etc.? how do you figure the odds it’s not god hiding things from us? how do you figure the odds that all advanced civilizations move almost immediately to communicating via gravity waves or somesuch that we can’t detect? how do you figure the odds we’re not in a zoo, or quarantined?

  • Re: “you have no way to assign any probabilities of anything relevant to the inquiry”

    Probabilities are measures of uncertainty; people would be well advised to attach them to all their beliefs.

  • mjgeddes

Deploying the now famous Geddes transhuman intuition, I’m still confident the SIA principle does in fact cancel the doomsday argument. Also, I think your argument for time-asymmetry was vaguely along the right lines; I believe the SIA principle can be generalized somehow, and that generalization is what kills all four of the following birds with one stone:

    (a) Anthropic puzzles (e.g doomsday argument)
    (b) Born probabilities
    (c) Puzzles of weighting subjective experiences
    (d) Time asymmetry

(a), (b), (c) and (d) are all related, and I’m sure the key is some big generalization of SIA which has so far been missed. I leave pesky near-mode details to you folks.

(Hint: Categorization, analogical inference, and reference classes are the key, not Bayes, and similarity is the important measure, not probability. It is simply obvious that categorization is the key to solving these puzzles, and it is more powerful than induction. Any LW/OB participant who has failed to spot these blinking obvious points is not a ‘super clicker’, I’m afraid).

    • While obviously not wanting to fail my “super clicker” grade, I would place the chance of one principle solving all those issues at close to zero.

Actually, a version of this argument can be turned around to show that if your prior probability of living in a universe that repeats ad infinitum (e.g. the universe is stuck in a big infinite loop) is non-zero, then you should conclude that you do in fact live in such a universe with probability 1. Indeed, I think this can be seen as a fairly serious problem for some kinds of Bayesian notions of science to handle.

First just consider the sleeping beauty problem. You enroll in some weird experiment and are told that you will be put to sleep, then a coin will be flipped. If the coin is heads, you are woken up the next day, informed the experiment isn’t over yet, and put back to sleep, after which your memory of waking up is wiped. If tails, you aren’t woken up tomorrow at all. In either case you are woken up the day after tomorrow and put back to sleep until being released 3 days from now. Now, when you are woken up during the experiment, should you assign probability 1/2 or probability 2/3 to the coin having come up heads? By a similar argument to the above, there are strong reasons to say it is 2/3 (certainly you should act as if it is 2/3, and not make even-odds bets on the coin being tails, since you could lose 2 dollars on heads and win only 1 on tails).
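A quick Monte Carlo check of the betting version described above (a sketch of this comment’s setup, in which heads produces two experimental awakenings and tails one):

```python
import random

random.seed(0)  # deterministic for reproducibility

heads_awakenings = 0
total_awakenings = 0
for _ in range(100_000):
    heads = random.random() < 0.5
    wakes = 2 if heads else 1  # heads: woken twice; tails: once
    total_awakenings += wakes
    if heads:
        heads_awakenings += wakes

# Fraction of awakenings at which the coin was heads: about 2/3,
# which is the rate a per-awakening bettor should use.
print(heads_awakenings / total_awakenings)
```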

    Now apply the same argument to the scenario I gave above. Do the math and you find that merely given the fact that you currently exist you should boost the probability that there are infinitely many copies of yourself (in time/space/whatever) to 1.

  • My diagnosis of the situation is that these all arise out of a fundamental misunderstanding of probability as confidence in something’s truth.

I mean, the problem is that there is no random process that spits you out into society with uniform probability of being any baby (or wakes you up with uniform probability of being in any day of the sleeping beauty paradox). These paradoxes are seductive because *we assume that since we can’t say anything in favor of us being born now rather than then, we can assign them equal probability.* Really, though, the notion of probability doesn’t even make sense here.

To take apart this paradox, let me first show why we can’t have shown that the probability of being near the end of civilization is large. Suppose I’m god and I can just magic up these worlds, and 999,999 times out of a million I set up the worlds to continue indefinitely with ever-expanding population. If I magic up a whole bunch of these worlds and assign souls randomly to them, then it turns out that the probability of being in the last generation is very, very low.

Thus, since this situation is wholly consistent with what we’ve observed so far, we can’t actually have the information necessary to conclude that the probability of the end of days coming soon is high. The error was taking our lack of any reason to distinguish being born as any given baby in our universe as justification for supposing that the world is fixed but who you are in it is chosen via a uniform distribution over all the people.

    Probability isn’t magic and it shouldn’t be used interchangeably with confidence. It’s just a fancy way of counting things.

  • Re: “After at first thinking you are in a random box”

    Why on earth would you think you are in a “random” box?!?

    If I rearranged the diagram so the 3 categories now corresponded to totally different time periods – would you *still* think that you are in a random box?

    Drawing arbitrarily-labelled boxes and assigning them equal probability does not seem to be a sensible way to generate priors on this topic.


  • Alexander Gabriel

    Shulman and Bostrom consider and largely dismiss this conclusion because, taken all the way, the SIA argument implies we live in a simulation. Or something. Anyone know why us being in a simulation means AI should be doable here?

    (page 11, footnote)