Anthropic Breakthrough

If the universe is extremely large, with effective physics and cosmological conditions varying widely from place to place, how can we predict the conditions we should expect to see?  In principle we can use anthropic reasoning, by expecting to see conditions that give rise to observers, and perhaps expecting more conditions that give more observers.  But how can we apply this theory when we know so little about the sorts of conditions that produce observers?

Two recent papers suggest a simple but powerful solution:

  • “Predicting the cosmological constant from the causal entropic principle” (Phys Rev 8/07, ungated here)
  • “Predictions of the causal entropic principle for environmental conditions of the universe” (Phys Rev 3/08, ungated here)

This causal entropic principle so far successfully predicts the strength of dark energy, the amplitude of matter fluctuations, the baryon-to-dark-matter ratio, and the baryon-to-photon ratio!  I’m still struggling to understand it, though.

A simple reading of the principle is that since observers need entropy gains to function physically, we can estimate the probability that any small spacetime volume contains an observer as proportional to the entropy gain in that volume.  Note:

  • They explicitly exclude the entropy of cosmic and black hole horizons.
  • They implicitly ignore future (e.g. Year Million) observers getting far more efficient and aggressive in using entropy.
  • They estimate that, aside from decaying dark matter, near us most entropy is made by starlight hitting dust, and most of that is in the past.
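As a toy illustration of the simple reading, one can weight candidate entropy sources by their entropy gains and read the results as observer probabilities; all numbers below are made-up placeholders, not figures from the papers:

```python
# Toy sketch of the "simple reading": the chance a region holds an
# observer is taken to be proportional to the entropy gain there.
# All entropy figures are hypothetical placeholders.

entropy_gain = {
    "starlight_on_dust":    1.0,  # dominant nearby source, per the papers
    "decaying_dark_matter": 0.5,  # possibly large, if it counts at all
    "other_processes":      0.1,
    # cosmic and black hole horizons: explicitly excluded, so omitted
}

total = sum(entropy_gain.values())
weights = {region: gain / total for region, gain in entropy_gain.items()}

for region, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{region:22s} P(observer) ~ {w:.2f}")
```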

If decaying dark matter can support observers, then the principle predicts such decay will either make much less entropy than dust, or that its entropy gains will be distributed similarly to dust.

The intuition behind excluding cosmic and black hole entropy seems to be that those entropy gains don’t tend to create observers from scratch, which seems plausible to me.  Can we make sense of this exclusion, and of ignoring future observer efficiency, by seeing this whole approach as just predicting what early observers like us will see?

My doubt about the simple reading comes because the authors express the principle in terms of “causally connected regions.”  To find a causally connected region of spacetime, you start with a possible particle path through spacetime and then collect all spacetime points that can both receive a causal influence from, and send one to, somewhere on that path.  The principle seems to say the observer weight for a large volume is proportional to the entropy made within the largest causally connected region corresponding somehow to that volume.
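This causal-diamond construction can be made concrete in a toy 1+1-dimensional Minkowski spacetime: a point lies in the diamond between two events iff it is in the causal future of the first and the causal past of the second. A minimal sketch, with function names of my own invention (units where c = 1):

```python
# Causal diamond in 1+1 Minkowski spacetime (c = 1).
# A point (t, x) lies in the causal future of (t0, x0) iff
# t >= t0 and |x - x0| <= t - t0; the causal past is the time-reverse.

def in_future_of(p, q):
    """True if event p is in the causal future of event q."""
    dt = p[0] - q[0]
    return dt >= 0 and abs(p[1] - q[1]) <= dt

def in_diamond(p, start, end):
    """True if p lies in the causal diamond between start and end."""
    return in_future_of(p, start) and in_future_of(end, p)

print(in_diamond((1, 0), (0, 0), (2, 0)))  # True: midpoint is inside
print(in_diamond((1, 5), (0, 0), (2, 0)))  # False: spacelike to start
```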

If the simple reading is wrong, then the principle seems to me to say not just that observers run on entropy gains, but that creating observers from scratch requires a large spacetime region of entropic processes with causal influences going back and forth over the whole region.  If so, this suggests that Earth life came neither from near Sol nor from before the last inflation — any seeding from before had a low probability of success.

Any physicists out there care to clarify all this?

Added: I wonder if this can be thought of as a world count prior, having nothing to do with observers?

Added 23 May: Apparently half of all starlight is blocked by dust.

Added 24 May: Scott Aaronson weighs in.

Added 3 Mar 2010: A big new advance!

  • mitchell porter

    I have to agree that their rationale is logical only if the entropy generated by “starlight hitting dust” is the majority of entropy gain that is ever relevant to the existence of observers. Which – if you approach it in a Year Million/Billion/Quadrillion spirit – is indeed a claim about the fate of intelligence in the universe.

    First question: does Earth count as “dust”?

    I don’t think there’s any intention to talk especially about early observers or the conditions of creation of observers (I haven’t found any such suggestion in the papers). I think they just dreamed up this new way of weighting vacua and kept making assumptions until they could get numbers.

    I will try to extract the qualitative steps which go into making one of their predictions, so we can see the argument shorn of technical calculations.

  • mitchell porter

    The intuitive essence of the cosmological-constant prediction can be found in Figure 9 of the first paper. The middle diamond has the most entropy-gain-through-dust-heating, so that’s the favored value. But how to turn that into a strictly deductive argument, so we can see all the assumptions?

  • Mitchell, sure Earth is dust, and yes the authors don’t talk about early observers – I’m just wondering if that makes better sense of their calculations.

  • Observers are not currently found where entropy production is highest. Entropy production is highest inside stars and other massive objects. Sure, entropy is also produced when starlight hits matter – but that matter is spread out over a huge area – and is mostly inert and lifeless.

    As far as we know (from our single known instance of them), observers may be found on energy gradients in the vicinity of stars – but that observation is hardly new.

  • Recovering irrationalist

    This seems odd. Wouldn’t at least early observers be more likely to spawn within a range of entropy gain/volume? Not far too low, not far too high?

    Surely that matches what we tend to observe regarding known life and physical quantities (heat, pressure, concentration of whatever substance), and removes the need for theory-complexifying specific exceptions for objects with huge entropy gain/volume, like black holes.

  • eddie

    Observers are not currently found where entropy production is highest [namely] inside stars and other massive objects.

    They aren’t? How could we know? Might not minds be constantly coming into existence within stars (albeit with very short existences)? By what rationale does anthropic reasoning count as “observers” all the humans ever living or ever to live, but not fluctuations in the stellar gases?

  • Tim and Recovering, the principle calculates entropy production over a very large volume, and so is not sensitive to detailed density variations or local derivatives.

  • Nick Tarleton

    By what rationale does anthropic reasoning count as “observers” all the humans ever living or ever to live, but not fluctuations in the stellar gases?

    Our observations are highly ordered; a maximum-entropy probability distribution over observations would very strongly favor chaotic ones; so our observations are not selected from a maximum-entropy distribution. The observations of observers created momentarily by random fluctuations would follow something close to a maximum-entropy distribution; so such observers contribute only a tiny amount to the total measure of observations, so the overall distribution is still very far from maximum entropy.

    Or so the argument goes, but I can’t think of a good, rigorous, a priori reason not to count chaotic observers formed by stellar gas fluctuations or whatever (though David Chalmers has made a promising start). I’m very confused by this.

  • Caledonian

    Our observations are highly ordered; a maximum-entropy probability distribution over observations would very strongly favor chaotic ones; so our observations are not selected from a maximum-entropy distribution.

    You’re confusing observers and observations. Chaotic observers born in the hearts of stars do not necessarily observe their surroundings.

  • Nick Tarleton

    Chaotic observers born in the hearts of stars do not necessarily observe their surroundings.

    Indeed they don’t. Their observations are falsidical, random… and, so far as I can tell, maximum-entropy.

  • Caledonian

    Borges’ Library

    ‘As far as you can tell’ doesn’t go far enough. You’re confusing the system that implements the observers with what they’re observing – their experiences are necessarily low-entropy. The Anthropic Principle dictates the sorts of worlds that the observers will perceive themselves to be in.

    Your inability to perceive order in the chaos is akin to complaining that most of the Library is nonsense static and ignoring the fact that it contains the Grand Unified Theory.

  • Russell Wallace

    Oh, now this is interesting! Here’s my interpretation after reading the first paper:

    The prior distribution of the cosmological constant is uniform (linear in the constant). So the observed value of 1e-123 is extremely unlikely; the expected value is near 1.
    But values higher than 1e-120 disrupt galaxy formation, hence are unobservable.
    This leaves 3 orders of magnitude unaccounted for, because intelligent life could almost as easily appear with 1e-120 as with 1e-123. The galaxies would rapidly vanish from each other’s sight, but each would still evolve life.
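    The “3 orders of magnitude” gap here can be restated as a one-line calculation; this is just arithmetic on the numbers above, not anything from the papers:

```python
# Under a prior uniform in the cosmological constant, truncated at the
# anthropic (galaxy-formation) cutoff of 1e-120, the conditional
# distribution below the cutoff is still uniform. So the chance of
# landing at or below the observed value is:
cutoff = 1e-120
observed = 1e-123

p_below_observed = observed / cutoff
print(p_below_observed)  # ~1e-3: the unaccounted three orders of magnitude
```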

    Specifically, suppose possible values are weighted by number of observers per baryon (as previous authors have done). That weighting doesn’t care to reduce cosmological constant much past 1e-120, on the assumption that intelligent life evolves in every galaxy, so it doesn’t matter if the galaxies separate rapidly.

    The above authors reproduce the observed value 1e-123 by weighting by number of observers per causal diamond (roughly speaking, per Hubble volume). In other words, their weighting cares to reduce it to 1e-123 on the assumption that it’s important that the galaxies stay in contact with each other at least until a decent fraction of their energy reserves have been used up.

    This makes sense (my interpretation; the authors talked only in terms of entropy production by dust heated by starlight) if we assume most galaxies don’t evolve intelligent life. The actual number of observers, per whatever divisor you choose, will then be determined by the number of galaxies to which observers, once evolved, can spread. A typical observer will therefore find himself in a universe where the cosmological constant was small enough for intelligent life evolving in one galaxy to spread to many others before they disappeared from sight.

    Therefore (with obvious disclaimer about extreme tentativeness of such conclusions) the above findings arguably constitute evidence that intelligent life is rare.

  • Russell Wallace

    Final afterthought for tonight:

    Why would the above reasoning not push, or at least float, the cosmological constant even lower than the observed value? The authors’ reasoning is that additional galaxies reachable in later time are mostly “used up”, so don’t provide much additional weighting. However, it’s been suggested that burnt-out stars retain most of their energy value to an advanced civilization (e.g. feeding helium etc into black holes for energy).

    Suppose intelligent life is rare, but not that rare, and over distances substantially larger than our horizon, expanding bubbles of civilization are likely to bump into each other, thereby negating further weight bias.

    By that interpretation – with additional disclaimer, obviously this is even more extremely tentative than my earlier comment – the above papers just might constitute the very first evidence for a lower bound on the frequency of intelligent life.

  • God has chosen the world that is the most perfect, that is to say, the one that is at the same time the simplest in hypotheses and the richest in phenomena.
    — Gottfried Wilhelm Leibniz

  • Russell, your concept would suggest the chance to find observers at any one place would be the integrated entropy production in the past light cone of that place. While plausible, this would seem to make different predictions from the causal diamond used in the causal entropic principle.

  • Russell Wallace

    Robin – I don’t think I understand your reasoning. The causal entropic principle says a larger causal diamond gives higher weight because it contains more dust-produced entropy. I’m suggesting it gives higher weight because it increases the total number of observers resulting from each origin of intelligent life. I think the two give the same predictions at least for a causal diamond no larger than the one we observe, i.e. cosmological constant as low as 1e-123. Am I missing something?

  • Russell, cones and diamonds are different shapes, so integrals over them could be different.

  • Russell Wallace

    The causal diamond is the amount of stuff in the future light cone (taking into account the eventual disappearance of things over the horizon of accelerating expansion). But even if it weren’t, the difference only matters to a small constant factor, which disappears in the noise in this context. What’s important here is the big-O size, and that’s the same for cones and diamonds.

  • I just added to this post.

  • Pingback: Overcoming Bias: Aliens Not So Strange

  • Pingback: Overcoming Bias: Seek Criticism