Future Filter Fatalism

One of the more colorful vignettes in philosophy is Gibbard and Harper’s “Death in Damascus” case:

Consider the story of the man who met Death in Damascus. Death looked surprised, but then recovered his ghastly composure and said, ‘I am coming for you tomorrow’. The terrified man that night bought a camel and rode to Aleppo. The next day, Death knocked on the door of the room where he was hiding, and said, ‘I have come for you’.

‘But I thought you would be looking for me in Damascus’, said the man.

‘Not at all’, said Death, ‘that is why I was surprised to see you yesterday. I knew that today I was to find you in Aleppo’.

That is, Death’s foresight takes into account any reactions to Death’s activities.

Now suppose you think that a large portion of the Great Filter lies ahead, so that almost all civilizations like ours fail to colonize the stars. This implies that civilizations almost never adopt strategies that effectively avert doom and allow colonization. Thus the mere fact that we adopt any purported Filter-avoiding strategy S is strong evidence that S won’t work, just as the fact that you adopt any particular plan to escape Death indicates that it will fail.

To expect S to work we would have to be very confident that we were highly unusual in adopting S (or any strategy as good as S), in addition to thinking S very good on the merits. This burden might be met if it was only through some bizarre fluke that S became possible at all; and a strategy might still improve our chances even while we remained almost certain to fail. But common features, such as awareness of the Great Filter, would not suffice to avoid future filters.
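
Here is a minimal numerical sketch of that constraint (the overall colonization rate and the adoption rates below are illustrative assumptions, not estimates): if at most a fraction f of civilizations like ours ever colonize, and a fraction p_adopt of them adopt S, then S's success rate among adopters is capped at f / p_adopt.

```python
# Toy illustration of the argument above; f and the adoption rates are made up.
# If at most a fraction f of civilizations like ours colonize, then
#   f >= p_adopt * P(colonize | adopt S), so P(colonize | adopt S) <= f / p_adopt.

f = 1e-9  # assumed overall colonization rate implied by a large future filter

for p_adopt in (0.5, 1e-3, 1e-9):
    cap = min(1.0, f / p_adopt)
    print(f"adoption rate {p_adopt:.0e}: P(colonize | adopt S) <= {cap:.1e}")

# Only if adopting S is itself astronomically rare (p_adopt ~ f) can we still
# expect S to work -- the 'highly unusual in adopting S' condition above.
```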

  • http://overcomingbias.com RobinHanson

    It could be that we have a 10% chance of survival even if we do the very best to listen to a warning that a factor of ten filter lies ahead of us, but only a 1% chance of survival if we ignore the warning.

    • dmytryl

       Or it could be that little filter lies ahead of us, but those who for whatever reason estimate a huge filter behave suboptimally and indeed have a larger filter ahead of them.

      • lump1

         Yeah, I also think about this sometimes: Fatalism/defeatism might very well be the *cause* of the failure that we were so sure was coming.

      • John Maxwell IV

        The Great Filter hasn’t always looked this large… in earlier days of the universe, when less time had passed for the galaxy to potentially be colonized, it would’ve looked much smaller.

    • Carl Shulman

      That’s with 1 order of magnitude of filter ahead. But elsewhere you invoke SIA to put almost all the filter, many more orders, ahead, as Katja did here:

      http://meteuphoric.wordpress.com/2010/03/23/sia-doomsday-the-filter-is-ahead/

      The difference between 10^-13 and 10^-12 may be worthwhile from a risk-neutral total utilitarian view, but both cases are still pretty grim.

      • http://overcomingbias.com RobinHanson

        I don’t recall putting most of the filter ahead of us, and it isn’t obvious to me that Katja does either. SIA gives a big boost to a future filter, relative to other indexical priors, but a parameter prior weighted heavily enough against a future filter will still, I think, result in a posterior also pretty skeptical of it.

      • Carl Shulman

        I’d like to see you give your pre-SIA prior followed by SIA update.

  • Doug

    A minor point is that every other previous failed civilization lived earlier in time than us. There’s only been a finite amount of time for interstellar civilizations to develop. So if you look out at time T and see a dead universe then your estimate of the great filter magnitude is dependent on when T is. 

    A fledgling civilization in a dead universe 1 billion years after stellar formation doesn’t estimate a big filter, but one 100 billion years after must estimate a very large filter.

    Therefore we would expect that any previous civilization would most likely have been less adapted to the filter than we should be.

  • kurt9

    I would say the only plausible candidate for the filter is the failure to become a space-faring civilization. Once a civilization is sufficiently dispersed in space (even if it’s just in the solar system – there’s LOTS of room and resources in the outer solar system and Kuiper belt), such a civilization becomes immortal for all practical purposes.

    Do realize that, contrary to the 1970s dreams of O’Neill and the L-5 Society, we have yet to become a space-faring civilization.

    • lump1

       I think that “immortal” is an overstatement. For me, it would require something like the creation of Von Neumann machines that actually work. Sure, they might eat us, and then each other, but it looks like our galaxy has lots of resources to support their limitless spread.

      • Locaha

         Humans are Von Neumann machines.

      • lump1

        Nah, humans are too fragile, high-maintenance and short-lived. I think it’s quite possible that no (biological) human being will ever survive an interstellar trip, the minimum requirement for a Von Neumann machine. Our strength has traditionally been self-reproduction once we arrive at our destination. Machines have a while to go before they match us in that regard.

  • Vladimir M.

    This seems like a pretty obvious point, even just starting from a realistic assessment of human knowledge. In recent centuries, we have indeed accomplished some impressive things in building useful devices and gaining insight into the laws of physics, but when it comes to things that determine the long-term fate of societies, we are still almost completely ignorant.

    This is true even when it comes to the ordinary kinds of social collapse that have already happened countless times in recorded history (and even within living memory). For most of these, we are still unable to disentangle causes and effects with any confidence. When it comes to hypothetical future threats, I see no good reason to believe that we have anything to work with beyond wild guesses and stabs in the dark. 

    In fact, I would even go so far as to argue that we are becoming less and less capable of meaningful general discussion of risks that might plausibly lead to a great filter. Certainly, I can think of several plausible filter mechanisms whose discussion is nowadays far outside the bounds of respectable mainstream discourse (let alone academic discourse). What’s more, it may be that the mechanisms leading to such biasing and narrowing of public discourse are in fact one of the general filter mechanisms, which works by inevitably leading wealthy and technologically advanced civilizations into ever greater delusions, eventually undercutting the basis for their science, technology, and prosperity.

    (Even if you don’t think my speculation from the above paragraph is plausible — and I’m myself offering it only as a plausible hypothesis — the more general point about our ignorance still stands.) 

    • http://juridicalcoherence.blogspot.com/ srdiamond

      Certainly, I can think of several plausible filter mechanisms whose discussion is nowadays far outside the bounds of respectable mainstream discourse (let alone academic discourse)

      What do you have in mind?

  • John Maxwell IV

    Super-intelligent Friendly AI could be an interesting candidate for such a strategy, since UFAI isn’t the great filter: http://lesswrong.com/r/discussion/lw/g1s/ufai_cannot_be_the_great_filter/  Of course, maybe the Great Filter is evidence that superintelligent AI is very hard, and civilizations generally get wiped out before achieving it.

    • MinibearRex

      Well first, I’m not sure why it’s necessary to suppose that a FAI wouldn’t want to expand into the universe. Obtaining more resources seems like something evolution would cause intelligent species to value (“exploration”), and so those species’ version of a “friendly” AI might share that value, or try to help that civilization do more of it.

      The other thing is that it seems rather unlikely that civilizations would be more likely to successfully create a FAI than a UFAI.

  • Robert Koslover

    I see it this way: 1. Both interstellar communication and space travel are difficult, especially the latter. 2. Evolution of intelligent life and development of technological societies are very rare, such that although there may indeed be many such civilizations in our universe, there are very few, if any, anywhere near us.

    For example, what if the nearest civilization to us that has achieved merely our own level of existing technology just happens to be more than (say) 40,000 light-years away? We would not notice them and they would not notice us. And I’m not talking about the time-delay effect here (i.e., that we would see them as they were 40,000 years ago), just the difficulty in sending signals exceeding background noise levels over such enormous distances. Now note that a universe where that number is the typical nearest-neighbor distance (for technological worlds) could actually contain millions of such civilizations! It’s a really big universe!

    So my guess is that the great filter is rather simple. Note that all the giant dinosaurs (not to even mention those oh-so-smart whales!) were quite heavily-evolved animals that lived on Earth for many tens of millions of years, but (so it seems) never even developed any kind of technology. So even in our own planet’s history, we tech-capable types seem to be very rare indeed.

    Now personally, I don’t think we are alone in the universe. My guess is that there are actually many intelligent civilizations out there, but that so far, we have not seen any confirming signals (let alone visits!) from them simply because they are too far away. But I do think we should keep looking!

  • http://juridicalcoherence.blogspot.com/ srdiamond

    Thus the mere fact that we adopt any purported Filter-avoiding strategy S is strong evidence that S won’t work

    No. This seems to be the same confusion as Yudkowsky’s. The fact that we adopt a measure does not prove that it won’t work. You’re confusing the prior with the likelihood ratio. It probably won’t work, but that low probability is not due to our having chosen the strategy, which could still augment the original prior.

    Your claim is very striking, but it’s wrong. With the error removed, it is, as another poster said, obvious.

    • Carl Shulman

      Prior to making any SIA-Great Filter update (if one buys SIA) one assigns some credence to S working. Call that credence X. If one knew that S would not be pursued for exogenous reasons, then the Filter update would not affect one’s beliefs about X, since beliefs about S’ effectiveness would not be entangled with beliefs about the intensity of the filter. There one stays with prior beliefs about S.
      However if one expects S to be pursued, then its effectiveness is tied up with future filter strength, and the Filter-SIA updates reduce the expected effectiveness of S (relative to our initial estimates just looking at S without Filter updates), just as they increase estimates of the danger of various colonization showstoppers.

      If we adopt S we move from the former case to the latter, and so reduce our credence in the effectiveness of S. We might think that our chance of survival goes up a bit, as mentioned in the post, because even the reduced effectiveness is non-zero, but the estimated effectiveness does decline.
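
      A toy joint model of the two cases (all probabilities below are made-up numbers, just for illustration):

```python
# Toy model: W = "S works"; condition on filter-style evidence of doom for
# civilizations like ours, with and without S being pursued. Numbers are assumed.

w0 = 0.5                          # prior credence X that S works
p_doom = {                        # P(doom | adopt S?, S works?)
    ("no",  True):  0.99, ("no",  False): 0.99,   # S unused: doom chance independent of W
    ("yes", True):  0.50, ("yes", False): 0.99,   # S used: doom less likely if S works
}

def p_works_given_doom(adopt):
    """Posterior P(S works | doom), given whether S was pursued."""
    num = p_doom[(adopt, True)] * w0
    return num / (num + p_doom[(adopt, False)] * (1 - w0))

print(p_works_given_doom("no"))   # 0.5   -- filter evidence leaves X untouched
print(p_works_given_doom("yes"))  # ~0.34 -- estimated effectiveness of S drops
```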

      What part of this are you contesting?

      • http://juridicalcoherence.blogspot.com/ srdiamond

        You had written:

        Thus the mere fact that we adopt any purported Filter-avoiding strategy S is strong evidence that S won’t work

        This is clearly wrong because the quality of the strategy will affect whether we adopt it. That was my point, based on what you had written.

        Now, you say,

        If one knew that S would not be pursued for exogenous reasons, then the Filter update would not affect one’s beliefs about X, since beliefs about S’ effectiveness would not be entangled with beliefs about the intensity of the filter. [emphasis added.] 

        “Exogenous,” I take it, means that it is unrelated to the quality of the strategy.

        That’s OK, but it’s not what you wrote. It is false that “the mere fact” that we adopt any filter-avoiding strategy is strong evidence it won’t work. [Leaving aside where you get that the evidence, even in the rephrased version, is "strong."]

        Similarly, Eliezer Yudkowsky had written in another thread:

        the possibility of an intelligence explosion, Friendly or unFriendly or global-economic-based or what-have-you, resembles the prospect of molecular nanotechnology in that it makes the Great Filter more puzzling, not less.  I don’t view this as a particularly strong critique of UFAI or intelligence explosion, because even without that the Great Filter is *still* very puzzling – it’s already very mysterious. [emphasis added.]

        This is a non-sequitur, in which the low credence is confused with the likelihood ratio. The “great filter” would seem to render the intelligence explosion much less likely; in any event, the low credence has nothing to do with it.

      • Carl Shulman

        “This is clearly wrong because the quality of the strategy will affect whether we adopt it.”

        No, that’s the initial estimate of the quality of S. You can’t double-count the same non-anthropics/filter estimate of quality.

      • dmytryl

        The strategy could improve survival by a factor of, say, 1000, and yet still there could be silence due to the rest of the filter. Do you count this as the strategy ‘not working’?

        Anyhow, I don’t think we have a lot of disaster type filter left. Less than 50% chance of that, I’d say. There can be other types of filter, such as spatial expansion being as stupid an idea as a mountain of mammoth tusks to reach the moon.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        No, that’s the initial estimate of the quality of S. You can’t double-count the same non-anthropics/filter estimate of quality.

        Where in the OP do you intimate anything about an initial estimate of quality?

        You apparently think this procedure is equivalent to “the mere fact that we adopt any purported Filter-avoiding strategy S is strong evidence that S won’t work,” when you haven’t specified anything about S. Or else you simply didn’t say what you meant and are loath to admit it or revise. I’m challenging what you wrote, not the gloss you choose subsequently to give it.

        The term “exogenous” plays a crucial role in that gloss, but it’s nowhere implied in the OP. Either you don’t understand the difference or don’t understand the difference in expression.

      • Carl Shulman

        Dmytry,

        “The strategy could improve survival by a factor of, say, 1000, and yet still there could be silence due to the rest of the filter. Do you count this as the strategy ‘not working’?”
        As I said in the post, one could improve chances significantly (in ratio terms) so long as the survival chance afterward remained low. And as I said in comments, such a shift could be important from a utilitarian point of view, but it still would almost certainly ‘not work’ in the sense of actually enabling colonization, and both senses are worth understanding for those who buy the arguments for big future filters.

        “Anyhow, I don’t think we have a lot of disaster type filter left. Less than 50% chance of some sort of drastic kaput, I’d say.”

        My impression of the evidence for future filters in the aggregate, setting aside SIA or other anthropic doomsday arguments, is similar. But if you assume past filters can be strongly discounted on a priori anthropic grounds, as Robin does, the probability goes up. I see a lot of problems with that type of anthropic argument, but SIA-style approaches to anthropic problems command at least plurality support among academics who have addressed the question, and have a number of theoretical virtues, so I wouldn’t dismiss it.

        “spatial expansion being as stupid an idea”

        Why? Even if the people running the civilization stay home, probes could send back the results of computations/experiments, ship some material goods (much more slowly and inefficiently, and from a more limited region of space), and provide defense services. Not to mention any desires to have more descendants, or altruism or the like.

        Colonization might produce divergent colonies that threaten home, or be disvalued on ideological grounds. Do you have any other particular reasons to expect it would be a bad idea?

      • dmytryl

        Carl Shulman:

        I also subscribe to something equivalent to SIA, I figure. I frame it differently, though. The question is not into which observer your soul incarnates. The correct question is what the world around me is like, which can be figured out using sensory evidence, including internal evidence (feeling yourself think).

        For example, let’s consider the sleeping beauty problem. First let’s consider the case where, if the coin falls heads, sleeping beauty is woken on Monday as usual, then on Tuesday and told it is Tuesday; and if the coin falls tails, sleeping beauty is woken on Monday and Tuesday but not told it’s Tuesday. We’ll assume waking up on Monday is equiprobable with waking up on Tuesday. Sleeping beauty wakes up. She isn’t told it’s Tuesday. P(heads|not told) = P(heads)*P(not told | heads) / P(not told) = 0.25/0.75 = 1/3. Now, if she’s instead kept asleep on heads-and-Tuesday, that changes absolutely nothing about the processing of the evidence she has when awake; it merely impedes processing of the evidence when she’s asleep.
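
        A quick Monte Carlo check of that 1/3 figure for the variant described above:

```python
import random

def p_heads_given_not_told(trials=200_000):
    """Estimate P(heads | awake and not told it's Tuesday) for the variant above:
    heads -> woken Monday (not told) and Tuesday (told it's Tuesday);
    tails -> woken Monday and Tuesday, never told. One awakening is sampled
    per trial, with Monday and Tuesday equally likely."""
    heads_and_not_told = not_told = 0
    for _ in range(trials):
        heads = random.random() < 0.5         # fair coin
        day = random.choice(["Mon", "Tue"])   # equiprobable awakening day
        told = heads and day == "Tue"         # told only on heads + Tuesday
        if not told:
            not_told += 1
            heads_and_not_told += heads
    return heads_and_not_told / not_told

print(p_heads_given_not_told())  # ~0.333, matching the 1/3 computed above
```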

        Note that this does not imply the abiogenesis filter is necessarily small. The only theory we have got, in the rigorous sense of the word “theory”, is that some pretty complicated proto-bacterium sprang to life via an incredibly unlucky thermal coincidence (which was enormously aided by the presence of amino acids and other basic building blocks). We have handwaving that it may have been something different, but no concrete chain of events that would lead to life. You can’t use data as evidence in favour of handwaving over a theory.

      • Carl Shulman

        Dmytry, 

        Yes, ordinary Bayesian conditioning on the fact that your experiences occurred gives SIA-like conclusions:

        http://arxiv.org/abs/math/0608592

        If I see a fly moving on the wall when I wake up (in random fashion) as Sleeping Beauty, then any specific fly position is more likely to be seen by someone if there are more awakenings.

        One only needs an indexical supplement to this (updating on the fact that these are your observations, rather than that the observations occurred) to distinguish between possible worlds that each contain your observations, but may differ in the frequency of such situations.

        “Note that this does not imply the abiogenesis filter is necessarily small. The only theory we have got, in the rigorous sense of the word “theory”, is that some pretty complicated proto-bacterium sprang to life via an incredibly unlucky thermal coincidence (which was enormously aided by the presence of amino acids and other basic building blocks).”

        Abiogenesis actually occurred pretty quickly after conditions allowed it, but I’ll take it as a stand-in for difficulties in evolution of life.

        Consider two hypotheses. On theory A, abiogenesis is so hard as to occur only on 1 in 10^1020 planets or even less frequently, but on theory B abiogenesis occurs on 1 in 10^20 planets. Then finding ourselves alive gives a 10^1000:1 update against A and for B. One order of magnitude in frequency of evolution of creatures like us can make up for an order of magnitude in prior probability.

        So as long as we assign some nontrivial probability to easy abiogenesis (and otherwise small early filters) SIA-like reasoning will rule out almost all of the hard early filter steps. And given our uncertainty about those early steps we are not in a position to claim 99.999%+ confidence about difficult early evolution of life.
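
        For concreteness, the same update in log10 odds (the prior odds here are an arbitrary assumption, just to show the scale):

```python
# The A-vs-B update above, done in log10 odds so the huge exponents stay
# manageable. The prior odds are an arbitrary assumption for illustration.

log10_prior_odds_A_over_B = 900   # suppose A starts out 10^900 times more probable
log10_freq_A = -1020              # P(observers on a given planet | A) = 10^-1020
log10_freq_B = -20                # P(observers on a given planet | B) = 10^-20

# Finding ourselves alive multiplies the odds by the ratio of these frequencies:
log10_posterior_odds = log10_prior_odds_A_over_B + (log10_freq_A - log10_freq_B)
print(log10_posterior_odds)       # -100: B now wins by 10^100 despite A's huge prior edge
```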

      • dmytryl

        Carl Shulman:

        One only needs an indexical supplement to this (updating on the fact that these are your observations, rather than that the observations occurred) to distinguish between possible worlds that each contain your observations, but may differ in the frequency of such situations.

        You should consider a world containing a multitude of your observations as a multitude of worlds where your observations are specifically yours, the picking of the observer being part of the theory. This makes it clear this is a question of priors.

        Early appearance of life:
        You know what else, besides the appearance of life, may be near the edge of a timespan? Our own existence vs the end of habitability of Earth. Maybe <500 million years left. http://www.psu.edu/ur/2000/oceans.html

        Billions of years is a long time for a star. There are small stars that burn very long, but those suffer intense flares. And the star doesn't merely have to burn for that long – the planet has to stay habitable through the change in the star's temperature – the sun should have brightened by a whopping 40% since the beginning.
        The bulk of the time for life formation and successful evolution of intelligent species may well be within the first 500 million years on young stars of solar size, rather than the several billion years of life around sub-solar-mass stars. We really need more data about small stars and their planets to rule that out.

        The long-term maintenance of habitability may well depend on very rare coincidences. Stars don't burn evenly.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        I had written,

        Where in the OP do you intimate anything about an initial estimate of quality?

        Which you didn’t answer. I can only guess that you thought the point purely verbal. (Not that this should be excusatory: your confused prose contributes to your confused thought.)

        Here’s the significance: unless you had characterized the avoider based on its endogenous effectiveness, deciding to use a given avoider doesn’t necessarily lower the probability of success.

        This leaves you with a formal theorem with dubious application to anything. The theorem is that if you can distinguish the endogenous and exogenous causes for not adopting an avoider, the probability of succeeding depends not only on the endogenous causes (favorably) but on the exogenous causes (unfavorably). But distinguishing them involves both conceptual and empirical problems; I don’t know that even the conceptual distinction can be drawn.

        What it does NOT mean is that:

        To expect S to work we would have to be very confident that we were highly unusual in adopting S (or any strategy as good as S), in addition to thinking S very good on the merits.

        We don’t have to be very confident of both if we’re confident enough for one. They’re compensatory factors.

        But there’s another confusion I might as well remark on. In evaluating the exogenous component, we would not be evaluating for any strategy as good as g. There you’re introducing exogenous considerations into evaluating the endogenous component. This confusion shows that you didn’t fully understand the role of the endogenous-exogenous distinction when you posted–that the confusion wasn’t purely verbal (although that’s bad enough to warrant criticism).

      • http://juridicalcoherence.blogspot.com/ srdiamond

        Correction. You’re introducing endogenous considerations into evaluating the exogenous factor.

        Minor correction: “any strategy as good as S.”

  • AspiringRationalist

    Pardon the ignorant question, but what does SIA stand for?

  • Neil Shepperd

    While true, I *think* this doesn’t actually have any effect on what we should actually do about the Great Filter. Assuming most civilizations eventually follow the same reasoning as us, whatever strategy we actually use will be the same strategy we would have expected other civilizations to have used. So the effect is a “constant factor” of evidence saying “this probably won’t work” for *all* our options.

    This factor presumably drops out of the expected utility calculation, or at least doesn’t change the *ordering* of expected utilities, so we should still just pick whatever looks most inside-view-likely to help our survival.
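
    A toy check of that claim (the numbers are invented): multiplying every option's inside-view success chance by the same pessimistic factor leaves the expected-utility ordering unchanged.

```python
# Toy check: a uniform pessimistic factor c on success probabilities does not
# change which option maximizes expected utility. All numbers are invented.

U_SURVIVE, U_DOOM = 1.0, 0.0
inside_view = {"strategy A": 0.30, "strategy B": 0.10, "strategy C": 0.02}

def ranking(c):
    eu = {k: c * p * U_SURVIVE + (1 - c * p) * U_DOOM for k, p in inside_view.items()}
    return sorted(eu, key=eu.get, reverse=True)

print(ranking(1.0))    # ['strategy A', 'strategy B', 'strategy C']
print(ranking(1e-6))   # same order: the filter discount drops out of the choice
```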

  • Robert Wiblin

    So we should place more emphasis on trying to do things we think other civilizations won’t have tried (even given the fact that they might have tried this strategy as well). But how are we to know what those are?

    • FormerRationalist

      No. We simply shouldn’t trust any argument based upon anthropics until we have actually sorted anthropics out.

  • FormerRationalist

    I think SIA-type arguments are fishy. Look, SIA says that you should believe more in hypotheses about the world which say that there are lots of other “yous” or “you-like experiences”.
     
    Essentially the perfect hypothesis from the point of view of SIA is one where reality consists of an extremely large number N of brains-in-vats having the “you” experience. (Let us ignore the possibility that reality could be infinitely large, for now).
     
    So, even if you assign a tiny prior to the aforementioned hypothesis, there will exist a sufficiently large N such that you should assign, for example, a > 50% chance to the N-brains-in-vats reality, as opposed to the ordinary hypothesis where there is just one of you living normally on earth.

    I consider this to be a reductio ad absurdum of SIA-based anthropics.
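
    A minimal numeric version of that claim (the prior and N values below are made up):

```python
# SIA toy: weight each hypothesis by the number of observers with your
# experiences that it predicts. Prior and N values are made up for illustration.

prior_vats = 1e-100     # tiny prior on the N-brains-in-vats world
prior_normal = 1 - prior_vats

def posterior_vats(n_brains):
    w_vats = prior_vats * n_brains      # SIA weight: prior times observer count
    w_normal = prior_normal * 1         # ordinary world: one of you
    return w_vats / (w_vats + w_normal)

print(posterior_vats(1e50))    # ~1e-50: N not yet large enough to matter
print(posterior_vats(1e101))   # ~0.91:  a big enough N swamps any fixed prior
```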

    • FormerRationalist

      By the way, the argument I have presented here is just a variant of the presumptuous philosopher problem.

    • dmytryl

      Or the simulation ‘arguments’. It boils down to one thing: the numbers that can be produced by a theory grow faster than any computable function of the size of the theory, if the theory is expressed in any Turing-complete grammar. (The same can trivially be extended to second-order theories.)

      A final predictive theory is the theory describing the world around me (or around us). A theory with multiple observers like me is equivalent to a group of theories of the world around me, and this group need not have higher probability (it sometimes can, though). Speaking of which, if you ever end up having your ‘probabilities’ add up to more than 1, that doesn’t mean you can normalize them to add to 1; it means you should go back and look at where you screwed up the math.

      • FormerRationalist

        “the numbers that can be produced by a theory grow faster than any computable function of the size of the theory”

        - yes, exactly. Couldn’t have put it better myself.

      • dmytryl

         More insidiously still, within the Turing-complete laws of physics, random fluctuations produce numbers that outgrow the improbability of the coincidence. Inside a neutron star, a computer can form and simulate a very large number of intelligent beings – larger than the improbability of the formation of the computer in question.

      • FormerRationalist

        “within the Turing-complete (or better) laws of physics, random fluctuations may be able to produce numbers that outgrow the improbability of the coincidence”

         - that is actually the best a priori, non-theistic argument for our existence that I have ever heard. The only assumption it really relies on is full Turing completeness, which in particular implies that the universe is in some sense infinite.

        And indeed it is plausible that eternal inflation is exactly what you described: a special kind of self-replicating process which (by chance) (sometimes) produces life, and produces an infinity of life at that.

        http://en.wikipedia.org/wiki/Eternal_inflation

    • http://juridicalcoherence.blogspot.com/ srdiamond

      SIA-type claims contain a ceteris paribus clause. Is there any reason to think the inferential weight of the theorems is substantial in the real world?

      This is why I could never even become mildly interested in these arguments. Unless my take is wrong, they’re debating an argument that really carries very little weight in drawing any conclusion. They invoke considerations that are (at best) formally relevant but aren’t theoretically relevant. Even if the argument worked, it would carry little weight because it is only one very abstract consideration among an overwhelming weight of potential countervailing considerations, swept under the ceteris paribus clause.

      • dmytryl

        Very good point on the ceteris paribus. The issue is that ‘all else’ is not equal across those “theories” where counts of people are compared, and so formally, they should carry pretty much zero weight for breaking the very assumptions they logically require.

  • http://entitledtoanopinion.wordpress.com TGGP

    An off-topic but interesting quote from that Damascus link:
    “This is exactly the reasoning that leads to taking one box in Newcomb’s problem, and one boxing is wrong. (If you don’t agree, then you’re not going to be in the target audience for this post I’m afraid.)”
    I never followed the Omega discussion all that closely, but isn’t one-boxing normative at LW?

    • dmytryl

      One-boxing will happen given enough non-normative assumptions about the predictor. For example, if the predictor is simulating me, and I want the real me to get paid, the situation is that I end up in an uncertain world state – the world around me is either the simulated or the real world – which can make me one-box. The other way is to self-identify with the algorithm, and in this case your actions have causal consequences inside the predictor and inside the real world. Self-identifying with the algorithm is natural for software, while self-identifying with particular hardware requires extra work such as some unique ID and other such things; the opposite is true for humans, where self-identifying with hardware comes innately.

      The normative predictor, I gather, works rather like the charisma of King David in Gibbard and Harper’s paper.

      The LW stance is, well, heil Yudkowsky as decision theorist for calling Hofstadter’s superrationality “timeless decision theory” and for having “formalized” it (which doesn’t seem to have happened yet). This is all really stupid with a dash of very questionable research ethics, and perhaps one of the best reasons not to take these folks seriously.

  • Locaha

    >>>Now suppose you think that a large portion of the Great Filter lies ahead, so that almost all civilizations like ours fail to colonize the stars.

    Why? Oh merciful Cthulhu, why should I suppose this? There is no empirical evidence that any civilization but ours exists. Why would you assume, upon seeing a single dot on a blank map, that the map contains billions of invisible dots?

    PS. Our universe also seems to lack Flying Spaghetti Monsters. Surely some great filter must destroy them before they reach Earth…