
Schulze-Makuch & Bains on The Great Filter

In their 2016 journal article “The Cosmic Zoo: The (Near) Inevitability of the Evolution of Complex, Macroscopic Life”, Dirk Schulze-Makuch and William Bains write:

An important question is … whether there exists what Robin Hanson calls “The Great Filter” somewhere between the formation of planets and the rise of technological civilizations. …

Our argument … is that the evolution of complex life [from simple life] is likely … [because] functions found in complex organisms have evolved multiple times, an argument we will elaborate in the bulk of this paper … [and] life started as a simple organism, close to [a] “wall” of minimum complexity … With time, the most complex life is therefore likely to become more complex. … If the Great Filter is at the origin of life, we live in a relatively empty universe, but if the origin of life is common, we live in a Cosmic Zoo where such complex life is abundant.

Here they seem to say that the great filter must lie at the origin of life, and seem unclear about whether it could also lie in our future.

In the introduction to their longer 2017 book, The Cosmic Zoo: Complex Life on Many Worlds, Schulze-Makuch and Bains write:

We see no examples of intelligent, radio-transmitting, spaceship-making life in the sky. So there must be what Robin Hanson calls ‘The Great Filter’ between the existence of planets and the occurrence of a technological civilisation. That filter could, in principle, be any of the many steps that have led to modern humanity over roughly the last 4 billion years. So which of those major steps or transitions are highly likely and which are unlikely? …

if the origin of life is common and habitable rocky planets are abundant then life is common, and we live in a Cosmic Zoo. … Our hypothesis is that all major transitions or key innovations of life toward higher complexity will be achieved by a sufficient large biosphere in a semi-stable habitat given enough time. There are only two transitions of which we have little insight and much speculation—the origin of life itself, and the origin (or survival) of technological intelligence. Either one of these could explain the Fermi Paradox – why we have not discovered (yet) any sign of technologically advanced life in the Universe.

So now they add that (part of) the filter could lie at the origin of human-level language & tech. In the conclusion of their book they say:

There is strong evidence that most of the key innovations that we discussed in… this book follow the Many Paths model. … There are, however, two prominent exceptions to our assessment. The first exception is the origin of life itself. … The second exception … is the rise of technologically advanced life itself. …The third and least attractive option is that the Great Filter still lies ahead of us. Maybe technological advanced species arise often, but are then almost immediately snuffed out.

So now they make clear that (part of) the filter could also lie in humanity’s future. (Though they don’t make it clear to me if they accept that we know the great filter is huge and must lie somewhere; the only question is where it lies.)

In the conclusion of their paper, Schulze-Makuch and Bains say:

We find that, with the exception of the origin of life and the origin of technological intelligence, we can favour the Critical Path [= fixed time delay] model or the Many Paths [= independent origins] model in most cases. The origin of oxygenesis, may be a Many Paths process, and we favour that interpretation, but may also be Random Walk [= long expected time] events.

So now they seem to also add the ability to use oxygen as a candidate filter step. And earlier in the paper they also say:

We postulate that the evolution of a genome in which the default expression status was “off” was the key, and unique, transition that allowed eukaryotes to evolve the complex systems that they show today, not the evolution of any of those control systems per se. Whether the evolution of a “default off” logic was a uniquely unlikely, Random Walk event or a probable, Many Paths, event is unclear at this point.

(They also discuss this in their book.) Which adds one more candidate: the origin of the eukaryote “default off” gene logic.

In their detailed analyses, Schulze-Makuch and Bains look at two key indicators: whether a step was plausibly essential for the eventual rise of advanced tech, and whether we can find multiple independent origins of that step in Earth’s fossil record. These seem to me to both be excellent criteria, and Schulze-Makuch and Bains seem to expertly apply them in their detailed discussion. They are a great read and I recommend them.

My complaint is with Schulze-Makuch and Bains’ titles, abstracts, and other summaries, which seem to arbitrarily drop many viable options. By their analysis criteria, Schulze-Makuch and Bains find five plausible candidates for great filter steps along our timeline: (1) life origin ~3.7Gya, (2) oxygen processing ~3.1Gya, (3) eukaryote default-off genetic control ~1.8Gya, (4) human-level language/tech ~0.01Gya, and (5) future obstacles to our becoming grabby. With five plausible hard steps, it seems unreasonable to claim that “if the origin of life is common, we live in a Cosmic Zoo where such complex life is abundant”.

Schulze-Makuch and Bains seem to justify dropping some of these options because they don’t “favour” them. But I can find no explicit arguments or analysis in their article or book for why these are less viable candidates. Yes, a step being essential and only having been seen once in our history only suggests, but hardly assures, that it is a hard step. Maybe other independent origins happened, but have not yet been seen in our fossil record. Or maybe it did only happen once, but that was just random luck and it could easily have happened a bit later. But these caveats are just as true of all of Schulze-Makuch and Bains’ candidate steps.

I thus conclude that we know of four plausible and concrete candidates for great filter steps before our current state. Now I’m not entirely comfortable with postulating a step very recently, given the consistent trend in increasing brain sizes over the last half billion years. But Schulze-Makuch and Bains do offer plausible arguments for why this might in fact have been an unlikely step. So I accept that they have found four plausible hard great filter steps in our past.

The total number of hard steps in the great filter sets the power in our power law model for the origin of grabby aliens. This number includes not only the hard filter steps that we’ve found in the fossil record of Earth until now, but also any future steps that we may yet encounter, any steps on Earth that we haven’t yet noticed in our fossil record, and any steps that may have occurred on a prior “Eden” which seeded Earth via panspermia. Six steps isn’t a crazy middle estimate, given all these considerations.
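
To make this power law concrete (a minimal sketch of the standard hard-steps math, with purely illustrative numbers, not a quote from any of the posts below): if a planet must pass n hard steps, each far harder than its window allows, then its chance of having passed them all by time t goes roughly as t^n, so successful planets tend to finish near the end of their windows.

```python
# Illustrative sketch: with n hard steps, P(all done by t) ~ t^n for t well
# below each step's expected time, so the completion time of a *successful*
# planet has density ~ n*t^(n-1) on its window [0, W].
import numpy as np

W = 1.0  # habitable window, arbitrary units
for n in (4, 6):
    t = np.linspace(0.0, W, 100_001)
    density = n * t**(n - 1) / W**n            # completion-time density | success
    mean_frac = np.trapz(t * density, t) / W   # mean fraction of window used
    print(f"n={n}: success chance ~ t^{n}, mean completion at {mean_frac:.2f} of window")
```

With n = 6 the mean completion point sits about 86% of the way through the window, which is why adding or dropping candidate hard steps matters so much for these estimates.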


Try-Try or Try-Once Great Filter?

Here’s a simple and pretty standard theory of the origin and history of life and intelligence. Life can exist in a supporting oasis (e.g., Earth’s surface) that has a volume V and metabolism M per unit volume, and which lasts for a time window W between forming and then later ending. This oasis makes discrete “advances” between levels over time, and at any one time the entire oasis is at the same level. For example, an oasis may start at the level of simple dead chemical activity, may later rise to a level that counts as “life”, then rise to a level that includes “intelligence”, and finally to a level where civilization makes big loud noises that are visible as clearly artificial from far away in the universe.

There can be different kinds of levels, each with a different process for stepping to the next level. For example, at a “delay” level, the oasis takes a fixed time delay D to move to the next level. At a “try once” level, the oasis has a particular probability of immediately stepping to the next level, and if it fails at that it stays forever “stuck”, which is equivalent to a level with an infinite delay. And at a “try try” level, the oasis stays at a level while it searches for an “innovation” to allow it to step to the next level. This search produces a constant rate per unit time of jumping. As an oasis exists for only a limited window W, it may never reach high levels, and in fact may never get beyond its first try-try level.

If we consider a high level above many hard try-try levels, and with small enough values of V,M,W, then any one oasis may have a very small chance of “succeeding” at reaching that high level before its window ends. In this case, there is a “great filter” that stands between the initial state of the oasis and a final success state. Such a success would then only tend to happen somewhere if there are enough similar oases going through this process, to overcome these small odds at each oasis. And if we know that very few of many similar such oases actually succeed, then we know that each must face a great filter. For example, knowing that we humans now can see no big loud artificial activity for a very long distance from us tells us that planets out there face a great filter between their starting level and that big loud level.

Each try-try type level has an expected time E to step to the next level, a time that goes inversely as V*M. After all, the more volume there is of stuff that tries, and the faster its local activity, the more chances it has to find an innovation. A key division between such random levels is between ones in which this expected time E is much less than, or much greater than, the oasis window W. When E << W, these jumps are fast and “easy”, and so levels change relatively steadily over time, at a rate proportional to V*M. And when E >> W, then these jumps are so “hard” that most oases never succeed at them.

Let us focus for now on oases that face a great filter, have no try-once steps, and yet succeed against the odds. There are some useful patterns to note here. First, let S be the sum of the delays D for the delay steps and of the expected times E for the easy try-try steps, over all such steps between the initial level and the success level. Such an oasis then really only has a time duration of about W-S in which to do all its required hard try-try steps.

The first pattern to note is that the chance that an oasis does all these hard steps within its window W is proportional to (V*M*(W-S))^N, where N is the number of these hard steps needed to reach its success level. So if we are trying to predict which of many differing oases is most likely to succeed, this is the formula to use.
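
As a rough check on this formula (a sketch under simple assumptions, not from the original post): model each hard try-try step as an exponential waiting time whose rate scales with V*M (the constant k below is an arbitrary made-up number), with the steps happening in sequence. The chance of finishing all N steps within W-S is then a Gamma CDF, and for hard steps that chance is approximately (rate*(W-S))^N / N!, i.e. proportional to (V*M*(W-S))^N.

```python
# Sketch under simple assumptions: each hard try-try step is an exponential
# waiting time with rate k*V*M (k is an arbitrary made-up constant), and the
# steps happen one after another.
from scipy.stats import gamma

def success_prob(V, M, W_minus_S, N, k=1e-3):
    """Chance an oasis completes N sequential hard steps within time W-S."""
    rate = k * V * M                          # per-step rate, proportional to V*M
    # The sum of N exponential(rate) waits is Gamma(N, scale=1/rate).
    return gamma.cdf(W_minus_S, a=N, scale=1.0 / rate)

# When rate*(W-S) << 1 this is ~ (rate*(W-S))**N / N!, i.e. proportional to
# (V*M*(W-S))**N. So doubling V should multiply the success chance by 2**N:
p1 = success_prob(V=1.0, M=1.0, W_minus_S=1.0, N=3)
p2 = success_prob(V=2.0, M=1.0, W_minus_S=1.0, N=3)
print(p2 / p1)   # ~ 2**3 = 8
```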

The second pattern to note is that if an oasis succeeds in doing all its required hard steps within its W-S duration, then the time durations required to do each of the hard steps are all drawn from the same (roughly exponential) distribution, regardless of the value of E for those steps! The time remaining in the oasis after the success level has been reached is also drawn from this same distribution. This makes concrete predictions about the pattern of times in the historical record of a successful oasis.
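
Here is a quick Monte Carlo version of that prediction (a sketch with made-up numbers): two sequential hard steps whose expected times differ by a factor of four, both much longer than the window; conditioning on the rare runs where both finish in time, the two step durations and the leftover time all come out with roughly the same distribution.

```python
# Monte Carlo sketch (made-up numbers): 2 sequential hard try-try steps with
# expected times 5 and 20, in a window of length 1. Keep only the rare runs
# where both steps finish inside the window.
import numpy as np

rng = np.random.default_rng(0)
W, E, trials = 1.0, (5.0, 20.0), 2_000_000
durations = np.column_stack([rng.exponential(e, trials) for e in E])
ok = durations.sum(axis=1) <= W               # successful oases only
d = durations[ok]
leftover = W - d.sum(axis=1)

print("successful runs:", ok.sum())
print("mean step durations:", d.mean(axis=0))
print("mean leftover time:", leftover.mean())
# Although the steps' expected times differ by 4x, the conditional means of
# both step durations and of the leftover time all come out near W/3.
```

This is the pattern used below to test Earth’s record: hard step durations, and the time left in the window, should look like draws from a common distribution.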

Now let's try to compare this theory to the history of life on Earth. The first known fossils of cells seem to date from 0.1-0.5 Ga (billion years) after life became possible on Earth, which happened about 4.2 Gya (billion years ago), which was about 9.6 Ga after the universe formed. The window remaining for (eukaryotic) life to remain on Earth seems to be 0.8-1.5 Ga. The relatively steady growth in max brain sizes since multi-cellular life arose 0.5 Gya suggests that during this period there were many easy, but no hard, try-try steps. Multi-cellular life seems to require sufficient oxygen in the atmosphere, but the process of collecting enough oxygen seems to have started about 2.4 Gya, implying a long 1.9 Ga delay step. Prokaryotes started exchanging genes about 2.0 Gya, eukaryotes appeared about 1.7 Gya, and modern sex appeared about 1.2 Gya. These events may or may not have been the result of successful try-try steps.

Can we test this history against the predictions that try-try hard step durations, and the window time remaining, should all be drawn from the same roughly exponential distribution? Prokaryote sex, eukaryotes, and modern sex all appeared within 0.8 Ga, which seems rather close together, leaving a long uneventful period of ~2 Ga before them. The clearest hard step duration candidates are before the first life, which took 0.0-0.5 Ga, and the window remaining of 0.8-1.5 Ga, which could be pretty different durations. Overall I'd say that while this data isn't a clear refutation of the same hard step distribution hypothesis, it also isn't much of a confirmation.

What about the prediction that the chance of oasis success is proportional to (V*M*(W-S))^N? The prediction about Earth is that it will tend to score high on this metric, as Earth is the only example of success that we know.

Let's consider some predictions in turn, starting with metabolism M. Life of the sort that we know seems to allow only a limited range of temperatures, and near a star that requires a limited range of distances from the star, which then implies a limited range of metabolisms M. As a result of this limited range of possible M, our prediction that oases with larger M will have higher chances of success doesn't have much room to show itself. But for what it's worth, Earth seems to be nearer to the inner than the outer edge of the Sun's allowable zone, giving it a higher value of M. So that's a weak confirmation of the theory, though it would be stronger if the allowed zone range were larger than most authors now estimate.

What about volume V? The radii of non-gas-giant planets seem to be lognormally distributed, with Earth at the low end of the distribution (at a value of 1 on this axis):

So there are many planets out there (at r=4) with 16 times Earth's surface area, and with 64 times the volume, ratios that must be raised to the power of N to give their advantage over Earth. And these larger planets are made of much more water than Earth is. This seems to be a substantial, if perhaps not overwhelming, disconfirmation of the prediction that Earth would score high on V^N. The higher the number of hard steps N, the stronger is this disconfirmation.

Regarding the time window W, I see three relevant parameters: when a planet's star formed, how long that star lasts, and how often nearby supernovae destroy all life on the planet. Regarding star lifetimes, main sequence star luminosity goes as mass to the ~3.5-4.0 power, which implies that star lifetimes go inversely as mass to the ~2.5-3.0 power. And as the smallest viable stars have 0.08 of our Sun's mass, that implies that there are stars with ~500-2000 times the Sun's lifetime, an advantage that must again be raised to the power N. And there are actually a lot more such stars, 10-100 times more than stars of the Sun's size:
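
As a quick check of that lifetime arithmetic (a sketch; the exponents are just those quoted above):

```python
# Quick check of the stellar-lifetime arithmetic quoted above:
# lifetime ~ fuel / luminosity ~ M / M**a = M**(1 - a), with the
# main-sequence luminosity exponent a roughly 3.5-4.0.
for a in (3.5, 4.0):
    m = 0.08                        # smallest viable star, in solar masses
    lifetime_ratio = m ** (1 - a)   # lifetime relative to the Sun's
    print(f"luminosity ~ M^{a}: lifetime ~ {lifetime_ratio:.0f}x the Sun's")
# Gives ~550x and ~1950x, i.e. the ~500-2000 range quoted above.
```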

However, the higher metabolism of larger mass stars gives them a spatially wider habitable zone for planets nearby, and planets near small stars are said to face other problems; how much does that compensate? And double stars should also offer wider habitable zones; so why is our Sun single?

Now what if life that appears near small long-lived stars would appear too late, as life that appeared earlier would spread and take over? In this case, we are talking about a race to see which oases can achieve intelligence or big loud civilizations before others. In which case, the prediction is that winning oases are the ones that appeared first in time, as well as having good metrics of V,M,W.

Regarding that, here are estimates of where the habitable stars appear in time and galactic radii, taking into account both star formation rates and local supernovae rates (with the Sun’s position shown via a yellow star):

As you can see, our Sun is far from the earliest, and it's quite a bit closer to the galactic center than is ideal for its time. And if the game isn't a race to be first, our Sun seems much earlier than is ideal (these estimates are arbitrarily stopped at 10Ga).

Taken together, all this seems to me to give a substantial disconfirmation of the theory that chance of oasis success is proportional to (V*M*(W-S))^N, a disconfirmation that gets stronger the larger is N. So depending on N, maybe not an overwhelming disconfirmation, but at least substantial and worrisome. Yes, we might yet discover more constraints on habitability to explain all these, but until we find them, we must worry about the implications of our analysis of the situation as we best understand it.

So what alternative theories do we have to consider? In this post, I’d like to suggest replacing try-try steps with try-once steps in the great filter. These might, for example, be due to evolution’s choices of key standards, such as the genetic code, choices that tend to lock in and get entrenched, preventing competing standards from being tried. The overall chance of success with try-once steps goes as the number of oases, and is independent of oasis lifetime, volume, or metabolism, favoring many small oases relative to a few big ones. With more try-once steps, we need fewer try-try steps in the great filter, and thus N gets smaller, weakening our prediction conflicts. In addition, many try-once steps could unproblematically happen close to each other in time.

This seems attractive to me because I estimate there to be in fact a great many rather hard steps. Say at least ten. This is because the design of even “simple” single cell organisms seems to me amazingly complex and well-integrated. (Just look at it.) “Recent” life innovations like eukaryotes, different kinds of sex, and multicellular organisms do involve substantial complexity, but the total complexity of life seems to me far larger than these. And while incremental evolution is capable of generating a lot of complexity and integration, I expect that what we see in even the simplest cells must have involved a lot of hard steps, of either the try-once or the try-try type. And if they are all try-try steps, that makes for a huge N, which makes the prediction conflicts above very difficult to overcome.

Well that’s enough for this post, but I expect to have more to say on the subject soon.

Added 19Jan: Turns out we also seem to be in the wrong kind of galaxy; each giant elliptical with a low star formation rate hosts 100-10K times as many habitable Earth-like planets, and a million times as many habitable gas giants, as does our Milky Way.


Great Filter, 20 Years On

Twenty years ago today, I introduced the phrase “The Great Filter” in an essay on my personal website. Today Google says 300,000 web pages use this phrase, and 4.3% of those mention my name. This essay has 45 academic citations, and my related math paper has 17 cites.

These citations are a bit over 1% of my total citations, but this phrase accounts for 5% of my press coverage. This press is mostly dumb luck. I happened to coin a phrase on a topic of growing and wide interest, yet others more prestigious than I didn’t (as they often do) bother to replace it with another phrase that would trace back to them.

I have mixed feelings about writing the paper. Back then I was defying the usual academic rule to focus narrowly. I was right that it is possible to contribute to many more different areas than most academics do. But what I didn’t fully realize is that to academic economists non-econ publications don’t exist, and that publication is only the first step to academic influence. If you aren’t around in an area to keep publishing, giving talks, going to meetings, doing referee reports, etc., academics tend to correctly decide that you are politically powerless and thus you and your work can safely be ignored.

So I’m mostly ignored by the academics who’ve continued in this area – don’t get grants, students, or invitations to give talks, to comment on paper drafts, or to referee papers, grants, books, etc. The only time I’ve ever been invited to talk on the subject was a TEDx talk a few years ago. (And I’ve given over 350 talks in my career.) But the worst scenario of being ignored is that it is as if your paper never existed, and so you shouldn’t have bothered writing it. Thankfully I have avoided that outcome, as some of my insights have been taken to heart, both academically and socially. People now accept that finding independent alien life simpler than us would be bad news, that the very hard filter steps should be roughly equally spaced in our history, and that the great filter gives a reason to worry about humanity’s future prospects.


Great Filter TEDx

This Saturday I’ll speak on the great filter at TEDx Limassol in Cyprus. Though I first wrote about the subject in 1996, this is actually the first time I’ve been invited to speak on it. It only took 19 years. I’ll post links here to slides and video when available.

Added 22Sep: A preliminary version of the video can be found here starting at minute 34.

Added 12Dec: The video is finally up:


Adam Ford & I on Great Filter

Adam Ford interviewed me again, this time on the Great Filter:

We have three main sources of info on existential risks (xrisks):

  1. Inside View Analysis – where we try to use our best theories to reason about particular causal processes.
  2. Earth Track Records – the empirical distribution of related events observed so far on Earth.
  3. The Great Filter – inferences from the fact that the universe looks dead everywhere but here.

These sources are roughly equally informative. #2 suggests xrisks are low, even if high enough to deserve much effort to prevent them. I’d say that most variations on #1 suggest the same. However, #3 suggests xrisks could be very high, which should encourage more xrisk-mitigation efforts.

Ironically most xrisk efforts (of which I’m aware) focus on AI-risk, which can’t explain the great filter. Most analysis efforts also focus on #1, less on #2, and almost none on #3.


Try-Menu-Combo Filter Steps

A great filter stands between simple dead matter and a visible expanding lasting civilization. Many hard steps (and also easy ones) must be passed to make it through this filter. But what kind of steps are these hard steps?

The first kind of steps that most people imagine are try-try steps. The local system must keep trying random variations at a constant rate until a successful one is found. Here the chance per unit time is a constant, and the chance of success by a given time is linear in that time, at least for small times. When a system must pass many hard steps by time t, and that success is quite unlikely, then for n hard steps the chance of that unlikely success by time t goes as t^n.

I recently pointed out that there’s another kind of hard step: try-once. Here the local system has only one chance; if it fails then, it fails forever. For these sort of steps, the chance of success doesn’t increase with time trying.

In this post, I want to point out that there are worse kinds of steps than try-try steps. Such as try-menu-combo steps.

Imagine that to pass some important step, evolution needed to create a species with a particular combination of eyes, hands, feet, stomach, ears, etc. Except that the available menu for each of these parts increased linearly with time.

For example, at first there is only one kind of stomach available. All species must use that kind of stomach. Then there are two kinds, and then three. Which kind of stomach is the next to be added to the stomach menu is pretty random. But there is zero chance of achieving this menu-combo next step until the right kind of stomach is added to the menu.

In this scenario, having the right kind of stomach on the menu is far from enough. The system also needs to add the right kind of eyes to the eye menu, and so on. Once all of the right kinds of items are on the right menus, then the last thing needed is a try-try step, to create a specific species that includes all the right parts via randomly combining menu items.

If there were just one kind of part needed, the chance of success by some date would increase linearly with time, making this an ordinary try-try step. But if there were two kinds of parts needed, chosen from two menus, then the chance would go as t^2. With three menus, it is t^3. And so on.
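
A toy simulation of this scaling (a sketch under simple assumptions, not the post's own model): suppose the needed item on each of m menus shows up at a time roughly uniform over the full period [0, T], since menus grow linearly and the order of items is random. Then the chance that every needed part is on its menu by time t goes as (t/T)^m.

```python
# Toy check of the t^m scaling (assumptions: each needed part lands on its
# menu at a time uniform over [0, T], independently across the m menus).
import numpy as np

rng = np.random.default_rng(1)
T, trials = 1.0, 1_000_000
for m in (1, 2, 3):
    arrivals = rng.uniform(0.0, T, size=(trials, m))
    ready_by = arrivals.max(axis=1)          # when the last needed part appears
    for t in (0.2, 0.4):
        est = (ready_by <= t).mean()
        print(f"m={m}, t={t}: simulated {est:.4f} vs t**m = {t**m:.4f}")
# m=1 reproduces the linear try-try case; m=2 and m=3 give the t^2 and t^3
# behavior described above (ignoring the final try-try assembly step).
```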

So now we can see that the t^n rule for the chance of many hard steps by time t can be generalized. Now instead of n being the number of hard steps, n becomes the sum of the powers m for each of the hard steps. Step power m is zero for a try-once step, is near one for a try-try step, and is greater than one for try-menu-combo steps.

In terms of its contribution to the t^n power law for completing all the hard steps, a try-menu-combo step is the equivalent of several try-try steps all happening at the same time. That is, great filter hard steps can in some sense happen in parallel, as well as in sequence.

With ordinary try-try steps, one only sees progress in the history record when steps are passed. So looking at the many forms of progress we’ve seen in the past half billion years through the lens of try-try steps, one concludes that these were many easy try-try steps, and so contained no hard steps.

But what if some sort of combo step has been happening instead? During a menu-combo step, one should see the progress of increasingly long menus for each of the parts. And yet it could still be a very hard step, the equivalent of many hard try-try steps happening in parallel. Maybe something about humans was a hard step after all?

Can anyone think of other plausible mechanisms by which hard steps could have a t^m dependence, for m > 1?

Added 10a: I expect that an m power step will be completed on average in m/(n+1) of the available window for life on Earth, where n is the total power of the steps done on Earth. So that’s still a problem for having a lot happen in the last half billion years.


Two Types of Future Filters

In principle, any piece of simple dead matter in the universe could give rise to simple life, then to advanced life, then to an expanding visible civilization. In practice, however, this has not yet happened anywhere in the visible universe. The “great filter” is the sum total of all the obstacles that prevent this transition, and our observation of a dead universe tells us that this filter must be enormous.

Life and humans here on Earth have so far progressed some distance along this filter, and we now face the ominous question: how much still lies ahead? If the future filter is large, our chances of starting an expanding visible civilization are slim. While being interviewed on the great filter recently, I was asked what I see as the most likely future filter. And in trying to answer, I realized that I have changed my mind.

The easiest kind of future filter to imagine is a big external disaster that kills all life on Earth. Like a big asteroid or a nearby supernova. But when you think about it, it is very hard to kill all life on Earth. Given how long Earth has gone without such an event, the odds of it happening in the next million years seem quite small. And yet a million years seems plenty of time for us to start an expanding visible civilization, if we were going to do that.

Yes, compared to killing all life, we can far more easily imagine events that destroy civilization, or kill all humans. But the window for Earth to support life apparently extends another 1.5 billion years into our future. As that window duration should roughly equal the typical duration between great filter steps in the past, it seems unlikely that any such steps have occurred since a half billion years ago, when multicellular life started becoming visible in the fossil record. For example, the trend toward big brains seems steady enough over that period to make big brains unlikely as a big filter step.

Thus even a disaster that kills most all multicellular life on Earth seems unlikely to push life back past the most recent great filter step. Life would still likely retain sex, Eukaryotes, and much more. And with 1.5 billion years to putter, life seems likely to revive multicellular animals, big brains, and something as advanced as humans. In which case there would be a future delay of advanced expanding life, but not a net future filter.

Yes, this analysis is regarding “try-try” filter steps, where the world can just keep repeatedly trying until it succeeds. In principle there can also be “first or never” steps, such as standards that could in principle go many ways, but which lock in forever once they pick a particular way. But it still seems hard to imagine such steps in the last half billion years.

So far we’ve talked about big disasters due to external causes. And yes, big internal disasters like wars are likely to be more frequent. But again the problem is: a disaster that still leaves enough life around could evolve advanced life again in 1.5 billion years, resulting in only a delay, not a filter.

The kinds of disasters we’ve been considering so far might be described as “too little coordination” disasters. That is, you might imagine empowering some sort of world government to coordinate to prevent them. And once such a government became possible, if it were not actually created or used, you might blame such a disaster in part on our failure to empower a world government to prevent it.

Another class of disasters, however, might be described as “too much coordination” disasters. In these scenarios, a powerful world government (or equivalent global coalition) actively prevents life from expanding visibly into the universe. And it continues to do so for as long as life survives. This government might actively prevent the development of technology that would allow such a visible expansion, or it might allow such technology but prevent its application to expansion.

For example, a world government limited to our star system might fear becoming eclipsed by interstellar colonists. It might fear that colonists would travel so far away as to escape the control of our local world government, and then they might collectively grow to become more powerful than the world government around our star.

Yes, this is not a terribly likely scenario, and it does seem hard to imagine such a lockdown lasting for as long as does advanced civilization capable of traveling to other stars. But then scenarios where all life on Earth gets killed off also seem pretty unlikely. It isn’t at all obvious to me that the too little coordination disasters are more likely than the too much coordination disasters.

And so I conclude that I should be in-the-ballpark-of similarly worried about both categories of disaster scenarios. Future filters could result from either too little or too much coordination. To prevent future filters, I don’t know if it is better to have more or less world government.


If Post Filter, We Are Alone

Me four years ago:

Imagine that over the entire past and future history of our galaxy, human-level life would be expected to arise spontaneously on about one hundred planets. At least it would if those planets were not disturbed by outsiders. Imagine also that, once life on a planet reaches a human level, it is likely to quickly (e.g., within a million years) expand to permanently colonize the galaxy. And imagine life rarely crosses between galaxies. In this case we should expect Earth to be one of the first few habitable planets created, since otherwise Earth would likely have already been colonized by outsiders. In fact, we should expect Earth to sit near the one percentile rank in the galactic time distribution of habitable planets – only ~1% of such planets would form earlier. …

If we can calculate the actual time distribution of habitable planets in our galaxy, we can then use Earth’s percentile rank in that time distribution to estimate the number of would-produce-human-level-life planets in our galaxy! Or at least the number of such planets times the chance that such a planet quickly expands to colonize the galaxy. (more)

New results:

The Solar System formed after 80% of existing Earth-like planets (in both the Universe and the Milky Way), after 50% of existing giant planets in the Milky Way, and after 70% of existing giant planets in the Universe. Assuming that gas cooling and star formation continues, the Earth formed before 92% of similar planets that the Universe will form. This implies a < 8% chance that we are the only civilisation the Universe will ever have. (more; HT Brian Wang)

Bottom line: these new results offer little support for the scenario where we have a good chance of growing out into the universe and meeting other aliens before a billion years have passed. Either we are very likely to die and not grow, or we are the only ones who could grow. While it is possible that adding more filters like gamma ray bursts could greatly change this analysis, that seems to require a remarkable coincidence of contrary effects to bring Earth back to being near the middle of the filtered distribution of planets. The simplest story seems right: if we have a chance to fill the universe, we are the only ones for a billion light years with that chance.


Hope For A Lumpy Filter

The great filter is the sum total of all of the obstacles that stand in the way of a simple dead planet (or similar sized material) proceeding to give rise to a cosmologically visible civilization. As there are 2^80 stars in the observable universe, and 2^60 within a billion light years, a simple dead planet faces at least roughly 60 to 80 factors of two in obstacles to birthing a visible civilization within 13 billion years. If there is panspermia, i.e., a spreading of life at some earlier stage, the other obstacles must be even larger by the panspermia life-spreading factor.

We know of a great many possible candidate filters, both in our past and in our future. The total filter could be smooth, i.e. spread out relatively evenly among all of these candidates, or it could be lumpy, i.e., concentrated in only one or a few of these candidates. It turns out that we should hope for the filter to be lumpy.

For example, imagine that there are 15 plausible filter candidates, 10 in our past and 5 in our future. If the filter is maximally smooth, then given 60 total factors of two, each candidate would have four factors of two, leaving twenty in our future, for a net chance for us now of making it through the rest of the filter of only one in a million. On the other hand, if the filter is maximally lumpy, and all concentrated in only one random candidate, then we have a 2/3 chance of facing no filter at all in our future. Thus a lumpy filter gives us a much better chance of making it.
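
Spelling out that arithmetic (just the numbers from the example above, in a few lines of code):

```python
# The smooth-vs-lumpy arithmetic from the example above.
total_factors = 60                       # total filter of ~2**60
candidates, future_candidates = 15, 5    # 10 past + 5 future candidate steps

# Maximally smooth: each candidate carries 60/15 = 4 factors of two, so the
# future filter is 5*4 = 20 factors of two.
per_step = total_factors // candidates
smooth_future_chance = 2.0 ** -(per_step * future_candidates)
print(smooth_future_chance)              # ~1e-6: "one in a million"

# Maximally lumpy: all 60 factors sit in one random candidate, which lies in
# our past with probability 10/15, leaving no future filter at all.
print((candidates - future_candidates) / candidates)   # 2/3
```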

For “try-try” filters, a system can keep trying over and over until it succeeds. If a set of try-try steps must all succeed within the window of life on Earth, then the actual times to complete each step must be drawn from the same distribution, and so take similar times. The time remaining after the last step must also be drawn from a similar distribution.

A year ago I reported on a new study estimating that 1.75 to 3.25 billion years remains for life on Earth. This is a long time, and implies that there can’t be many prior try-try filter steps within the history of life on Earth. Only one or two, and none in the last half billion years. This suggests that the try-try part of the great filter is relatively lumpy, at least for the parts that have and will take place on Earth. Which according to the analysis above is good news.
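
The rough logic behind "only one or two" (a back-of-the-envelope sketch; the ~4 Ga figure for elapsed time is an approximation): if N hard steps were completed in the elapsed history of life on Earth, and the leftover window is another draw from the same distribution as each step's duration, then on average the leftover is about (elapsed + leftover)/(N+1), so N is roughly elapsed/leftover.

```python
# Back-of-the-envelope: N hard steps done so far, leftover window drawn from
# the same distribution as each step duration, so on average
#   leftover ~ (elapsed + leftover) / (N + 1),  i.e.  N ~ elapsed / leftover.
elapsed = 4.0                        # Ga of life on Earth so far (approximate)
for leftover in (1.75, 3.25):        # Ga remaining, per the cited study
    print(f"leftover = {leftover} Ga  ->  N ~ {elapsed / leftover:.1f}")
# This gives N of roughly 1 to 2, i.e. "only one or two" hard try-try steps.
```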

Of course there can be other kinds of filter steps. For example, perhaps life has to hit on the right sort of genetic code right from the start; if life hits on the wrong code, life using that code will entrench itself too strongly to let the right sort of life take over. These sort of filter steps need not be roughly evenly distributed in time, and so timing data doesn’t say much about how lumpy or uniform are those steps.

It is nice to have some good news. Though I should also remind you of the bad news that anthropic analysis suggests that selection effects make future filters more likely than you would have otherwise thought.


Future Filter Fatalism

One of the more colorful vignettes in philosophy is Gibbard and Harper’s “Death in Damascus” case:

Consider the story of the man who met Death in Damascus. Death looked surprised, but then recovered his ghastly composure and said, ‘I am coming for you tomorrow’. The terrified man that night bought a camel and rode to Aleppo. The next day, Death knocked on the door of the room where he was hiding, and said ‘I have come for you’.

‘But I thought you would be looking for me in Damascus’, said the man.

‘Not at all’, said Death ‘that is why I was surprised to see you yesterday. I knew that today I was to find you in Aleppo’.

That is, Death’s foresight takes into account any reactions to Death’s activities.

Now suppose you think that a large portion of the Great Filter lies ahead, so that almost all civilizations like ours fail to colonize the stars. This implies that civilizations almost never adopt strategies that effectively avert doom and allow colonization. Thus the mere fact that we adopt any purported Filter-avoiding strategy S is strong evidence that S won’t work, just as the fact that you adopt any particular plan to escape Death indicates that it will fail.

To expect S to work we would have to be very confident that we were highly unusual in adopting S (or any strategy as good as S), in addition to thinking S very good on the merits. This burden might be met if it was only through some bizarre fluke that S became possible, and a strategy might improve our chances even though we would remain almost certain to fail, but common features, such as awareness of the Great Filter, would not suffice to avoid future filters.
