Tristan Cook has posted an impressive analysis, “Replicating and extending the grabby aliens model.” We are grateful for his detailed and careful work. Cook’s main focus is on indexical inference, showing how various estimates depend on different approaches to indexical analysis. But he has an appendix, “Updating n on the time remaining”, wherein he elaborates a claim that some of our analysis is “problematic”, is “a possible error”, is “incompatible” with his, and that
“These results fail to replicate Hanson et al.’s (2021) finding that (the implicit use of) SSA implies the existence of GCs in our future.”
In this post I respond to that critique.
Cook quotes our claim:
If life on Earth had to achieve n “hard steps” to reach humanity’s level, then the chance of this event rose as time to the n-th power. Integrating this over habitable star formation and planet lifetime distributions predicts >99% of advanced life appears after today, unless n < 3 and max planet duration <50Gyr. That is, we seem early.
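For reference, the time-to-the-n scaling in that quote comes from a standard hard-steps argument, which I paraphrase here (the step-time notation $E_i$ is mine): if each of the n hard steps is an exponential waiting process whose expected time $E_i$ far exceeds the planet’s habitable window, then

$$\Pr[\text{all } n \text{ steps done by time } t] \;\approx\; \prod_{i=1}^{n} \frac{t}{E_i} \;\propto\; t^{\,n},$$

so the density of completion times scales as $n\,t^{\,n-1}$.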
He also replicates this key diagram of ours:
This shows that humanity looks very early unless we have a low value of either n (the number of hard steps needed to create advanced life like us) or Lmax (max habitable planet duration). We suggest that this be explained via a grabby-aliens-fill-the-universe deadline coming soon, though we admit that either a very low n or a very low Lmax is another possible explanation.
But Cook claims instead that “large n and large Lmax … are incompatible.” Why? He offers a simple Bayesian model with a uniform prior over n, equal numbers of two types of planets all born at the same time with lifetimes of 5 and 100 billion years, and an update on the fact that humans appeared on one of these planets after 4.5 billion years. He shows (correctly) that the Bayesian posterior then overwhelmingly favors n=1, with almost no weight on n>2.
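Here is a minimal sketch of that toy update as I understand it (my own reconstruction in Python, not Cook’s code). It assumes the standard hard-steps likelihoods from the scaling above: a planet of lifetime L births a civ with chance proportional to L^n, arrival times have density proportional to n t^(n-1), and, SSA-style, we treat ourselves as a random draw from all observers:

```python
import numpy as np

t_obs = 4.5                          # Gyr: when humans appeared
lifetimes = np.array([5.0, 100.0])   # Gyr: two planet types, equal numbers
ns = np.arange(1, 11)                # candidate hard-step counts, uniform prior

posterior = []
for n in ns:
    # Total observers across planet types scales as sum of L^n
    # (hard-steps limit: every step much harder than either lifetime).
    total_observers = np.sum(lifetimes ** n)
    # Observer density at t_obs scales as n * t^(n-1), counting both
    # planet types since both are still habitable at 4.5 Gyr.
    density_at_t = np.sum(lifetimes >= t_obs) * n * t_obs ** (n - 1)
    # Likelihood that a random observer finds itself at t_obs, given n.
    posterior.append(density_at_t / total_observers)

posterior = np.array(posterior) / np.sum(posterior)
for n, p in zip(ns, posterior):
    print(f"n={n}: {p:.4f}")
```

With these assumptions, roughly 90% of the posterior lands on n=1 and well under 1% on n>2, matching the result Cook reports.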
But this seems to me to just repeat our point above: without a grabby aliens deadline, one needs to assume either low n or low Lmax. If you allow large Lmax with no deadline, that will force you to conclude low n; no surprise. (Also, it seems to me that none of Cook’s n estimates update on the varied evidence that has led other authors to estimate higher n.)
The body of Cook’s paper describes a much more elaborate Bayesian model, one which includes the deadline effect. And the posteriors on Lmax there also strongly favor low Lmax, for all the indexical reasoning cases that he considers. Does this show that large Lmax is “incompatible” with large n?
No, because this result is easily attributed to the fact that his priors strongly favor both low n and low Lmax. Cook considers three priors on n, with medians of 0, 1, and 3. And while he allows Lmax to range from 5 to 20,000 Gyr, the median of his prior is ~10 Gyr, even though the actual median planet lifetime is 5,000 Gyr. An analysis that won’t allow large Lmax or large n can’t tell us if those two are compatible.
Note that the priors in Cook’s main Bayesian analysis are not designed to express great ignorance, but instead to agree with estimates from several prior papers that Cook likes. So Cook’s main priors exclude the possibilities that grabby alien civs might expand slowly, or that there are a great many non-grabby civs for each grabby one. And he tunes his prior to ensure a median of exactly one intelligent civilization per observable universe volume.
However, in another appendix of Cook’s paper, “Varying the prior on Lmax”, he also considers a wider prior on Lmax: a lognormal with a median of 500 Gyr and a one-sigma range of 110 to 2200 Gyr. (He retains all his other prior choices, including a prior on n with median 1.) His posterior from this has a median Lmax of 7 Gyr and a 90th percentile at ~100 Gyr, which means that, compared to his prior, the posterior favors substantially lower values of Lmax. Does this prove his claim that high Lmax is incompatible with high n?
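(A quick aside: those stated prior numbers are internally consistent. A one-sigma range of roughly 110 to 2200 Gyr around a median of 500 Gyr corresponds to a log-space sigma of ln(2200/500) ≈ 1.48, as this small check shows:)

```python
import math

median = 500.0                    # Gyr, stated median of the lognormal prior
sigma = math.log(2200 / median)   # log-space sigma implied by the upper bound
print(math.exp(math.log(median) - sigma))   # ~114 Gyr, close to the stated 110
print(math.exp(math.log(median) + sigma))   # 2200 Gyr, by construction
```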
I think not, because 60% of this posterior is on cases with less than one grabby civ per observable universe volume, and it takes a much higher density of such civs to create a grabby aliens deadline effect.
Look, the fact that we now find ourselves on a planet that has lasted for only 4.5 Gyr should boost low-Lmax hypotheses in two ways. The first and weaker effect is that the lower Lmax is, the fewer planets there are below Lmax, and thus the higher the prior on our particular planet becomes. This is a count effect, which boosts our planet’s posterior by a factor of ten for every factor of one hundred by which Lmax falls. As the total dynamic range of Lmax under consideration here is a factor of 4000, that’s a real but modest effect.
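To put a number on that (my arithmetic, using the scaling just stated): a boost of ten per factor of one hundred means posterior weight scaling as Lmax to the minus one-half, so across the full range the count effect tops out around a factor of 63:

```python
import math

# Count effect as stated above: 10x boost per 100x drop in Lmax,
# i.e. posterior weight on our planet scaling as Lmax**(-1/2).
dynamic_range = 20_000 / 5        # Lmax considered: 5 to 20,000 Gyr
print(math.sqrt(dynamic_range))   # ~63: maximum count-effect boost
```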
The second effect is much larger. Without a grabby aliens deadline, for n=1 a planet that lasts 4000 times longer becomes 4000 times more likely to birth an intelligent civilization. For n=2, it becomes sixteen million times more likely. And this factor gets even bigger for larger n. Thus observing that we appeared on a planet that has lasted only 4.5 Gyr can force a huge additional update toward lower Lmax; without a deadline, that’s the only way to explain how we appear on such a short-lived planet. This strong effect plausibly explains the strong Lmax updating we see in Cook’s wider-Lmax-prior analysis, as most of the posterior weight there is on scenarios with no deadline effect.
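A quick comparison of the two effects at the extremes of the Lmax range (again my arithmetic, not a calculation from either paper):

```python
# Without a deadline, a planet living R times longer is R**n times more
# likely to birth a civ; compare to the ~63x count effect sketched above.
R = 4000                          # ratio of largest to smallest Lmax here
for n in (1, 2, 3):
    print(f"n={n}: likelihood factor {R ** n:.1e}")   # 4.0e+03, 1.6e+07, 6.4e+10
```

Even at n=2 the no-deadline effect dwarfs the ~63x count effect, which is why, absent a deadline, the posterior gets pushed so hard toward low Lmax.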
Bottom line: I happily admit there is a count effect that prefers lower Lmax in a posterior compared to a prior. But this effect is weak, a factor of ten in posterior per factor of one hundred in Lmax, and it happens regardless of whether a grabby aliens deadline applies. The other, much stronger Lmax update effect is cancelled by a grabby aliens deadline. Yes, if aliens are so rare that there’s no deadline effect, the update toward low Lmax seems to be strong. But there is an important sense in which such a deadline is an alternate explanation for human earliness. This is what we claimed in our paper, and I don’t see that Cook’s analysis changes this conclusion.
P.S. Cook doesn’t actually simulate a stochastic model where alien civs arise and then block each other. He instead uses a simple formula, “following Olson (2015)”. So his distributions over civ size include only variance over time, not other kinds of variance. I worry that this formula assumes an independence of alien volume locations that isn’t true, though I doubt the errors from this simplification make that big a difference.
I don't know that the hard steps number n has been very well defined. Preconditions, like oxygen first being absorbed by free elements before atmospheric oxygen levels could rise, or the crust not being molten, seem to constrain various hard step intervals. Admittedly our uncertainty is large enough that you could argue for dozens of hard steps, but estimates at the other end also seem to put it at 1 or none.
Thanks for your response, Robin! I have written a reply here.