Category Archives: Physics

Past Hypothesis Taxonomy

Let me try an experiment: using a blog post to develop a taxonomy.  Here I'll try to develop a list/taxonomy of (at least semi-coherent) answers to the question I posed yesterday: why is it harder to formally predict pasts, versus futures (from presents)? Mostly these are explanations of the "past hypothesis", but I'm trying to stay open-minded toward a wide range of explanations.

I'll start with a list of answers, and then add more and group them as I read comments, think, etc.  I'll feel free to edit the post from here on:

  • Extremely unlikely:
    • Reality isn't different; we just ask different future vs. past questions.
    • An outside "God" intervened to make our past different.
    • We live after a big local ebb (i.e., fluctuation) in matter.
  • Rather unlikely:
    • Quantum measurement has a local time asymmetry that makes big effects.
    • A weak local time asymmetry in matter accumulates to big effects.
    • A past ebb in spacetime shape (e.g., inflation) forced a big matter ebb.
    • All spacetime boundaries satisfy a law-like "low entropy" condition.
  • Unlikely:
    • Our expanding cosmos violates one-to-one state mappings across time.
    • Past and future have different spacetime law-like boundary conditions.

Scandalous Heat

Mea Culpa: I was wrong; Eliezer was wrong; Sean Carroll was right.  

Thermodynamics is the study of heat, temperature, pressure, and related phenomena. Physicists have long considered it the physics area least likely to be overturned by future discoveries, in part because they understand it so well via "statistical mechanics."  Alas, not only are we far from understanding thermodynamics; the situation is much worse than almost everyone (including me until now) admits!  In this post I'll try to make this scandal clear.

For an analogy, consider the intelligent design question: did "God" or a "random" process cause life to appear?  To compute Bayesian probabilities here, we must multiply the prior chance of each option by the likelihood of life appearing given that option, and then renormalize.  So all else equal, the less likely that life would arise randomly, the more likely God caused life. 
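The multiply-and-renormalize step above can be sketched numerically. The priors and likelihoods below are invented purely for illustration; they are not estimates of the actual design question.

```python
# Toy Bayesian update for the design-vs-random analogy.
# All numbers here are made-up illustration values.
priors = {"design": 0.5, "random": 0.5}
likelihood_of_life = {"design": 1.0, "random": 1e-6}

# Multiply prior by likelihood, then renormalize.
unnormalized = {h: priors[h] * likelihood_of_life[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}
print(posterior)  # "random" gets only ~1e-6 of the posterior mass
```

Note how the less likely life is to arise randomly, the more posterior mass shifts to the design option, all else equal.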

Imagine that while considering ways life might arise randomly, we had trouble finding any scenario wherein a universe (not just a local region) randomly produced life with substantial probability.  Then imagine someone proposed this solution: a new law of nature saying "life was sure to appear even though it seems unlikely."  Would this solve the problem?  Not in my book.

We are now in pretty much this same situation when "explaining" half of thermodynamics.  What we have now are standard distributions (i.e., probability measures) over possible states of physical systems, distributions which do very well at predicting future system states.  That is, if we condition these distributions on what we know about current system states, and then apply local physics dynamics to those states, we get excellent predictions about future states.  We predict heat flows, temperatures, pressures, fluctuations, engines, refrigerators, etc., all with great precision.  This seems a spectacular success.

BUT, this same approach seems spectacularly wrong when applied to predicting past states of physical systems.  It gets heat flows, temperatures, and pretty much everything else wrong; not just a little wrong, but a lot.  For example, we might think we know about the past via our memories and records, but this standard approach says our records are far more likely to have resulted from random fluctuations than to actually be records of what we think they are.
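A toy model (my illustration, not from the post) can make this symmetry concrete. The Ehrenfest urn below is a standard stand-in for reversible statistics: condition on an atypical "low entropy" macrostate now, and the same stationary measure says that both the next step and the previous step were, on average, closer to equilibrium. It retrodicts a higher-entropy past, which is exactly the records-as-fluctuations problem.

```python
import random

# Ehrenfest urn: N balls in two boxes; each step a uniformly random
# ball switches boxes. The chain is stationary and time-reversible,
# a toy stand-in for reversible micro-dynamics.
random.seed(0)
N, steps = 40, 200_000
n = N // 2                           # balls in box A, start at equilibrium
traj = [n]
for _ in range(steps):
    n += 1 if random.random() < (N - n) / N else -1
    traj.append(n)

# Condition on "low entropy now": occupancy far from N/2.
sel = [t for t in range(1, len(traj) - 1) if abs(traj[t] - N / 2) >= 8]
avg_now = sum(abs(traj[t] - N / 2) for t in sel) / len(sel)
avg_before = sum(abs(traj[t - 1] - N / 2) for t in sel) / len(sel)
avg_after = sum(abs(traj[t + 1] - N / 2) for t in sel) / len(sel)

# Both neighbors are, on average, CLOSER to equilibrium: the measure
# "retrodicts" entropy increase toward the past just as it predicts
# entropy increase toward the future.
print(avg_before, avg_now, avg_after)
```

The forward prediction (entropy rises) is the spectacular success; the identical backward retrodiction (entropy was higher) is the scandal.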


Physicists Held To Lower Standard

In August I complained about vague LHC forecasts.  My op-ed based on that post just appeared in Symmetry, "A joint Fermilab/SLAC publication."  Its blurb: "Today’s LHC forecasts are no easier to score than the typical horoscope."  It ends:

But geez – the LHC costs more than $10 billion of public money. Shouldn’t we expect big-shot physicists who hope to crow to the public about LHC vindication to express their predictions in a more scoreable form? We don’t accept less from weather, business, or sport forecasters; why accept less from physicists?

My implicit answer: we hold physicists to lower standards.  As I posted two years ago:

Consider how differently the public treats physics and economics.   Physicists can say that this week they think the universe has eleven dimensions, three of which are purple, and two of which are twisted clockwise, and reporters will quote them unskeptically, saying "Isn’t that cool!"   But if economists say, as they have for centuries, that a minimum wage raises unemployment, reporters treat them skeptically and feel they need to find a contrary quote to "balance" their story.

That same Symmetry issue says:

Leon Lederman, a 1988 Nobel laureate and Fermilab physicist, plopped a folding table and two chairs on a busy New York City street corner and sat under colorful hand-scrawled signs offering to answer physics questions.  Even in a city of people too busy for impromptu sidewalk conversations, the sight was too tempting to resist. … Soon about 20 people formed a line down the block. They asked Lederman about the strong force, time and space, fusion, and even time travel. Some asked follow-up questions to get a clearer understanding, while others just seemed thrilled at the chance to meet a Nobel Prize winner.

I’ll bet none told Lederman he was wrong.  Imagine how a Nobel-winning economist would be received. 


Self-Indication Solves Time-Asymmetry

This seems a deep insight simple enough to explain in a blog post (and so I’m probably not the first to see it):  the self-indication approach to indexical uncertainty solves the time-asymmetry question in physics!  To explain this, I must first explain time-asymmetry and indexical uncertainty.

A deep question in physics is time asymmetry – why doesn’t stuff happen as often "backwards" in time?  We have no explanation for the tiny CP violation in particle physics, but all the other time asymmetries are thought to arise from a very low early-universe entropy.  The most popular explanation for this is inflation, especially eternal inflation, which says that any small space-time region satisfying certain conditions is connected to infinitely many large time-asymmetric regions much like what we see around us.  Alas, the chance that any small region satisfies these inflation conditions is extremely small.  As a recent paper puts it:

Initial conditions which give the big bang a thermodynamic arrow of time must necessarily be low entropy and therefore "rare." There is no way the initial conditions can be typical, or there would be no arrow of time, and this fact must apply to inflation and prevent it from representing "completely generic" initial conditions.  … If you can regard the big bang as a fluctuation in a larger system it must be an exceedingly rare one to account for the observed thermodynamic arrow of time.

So the question of time-asymmetry reduces to this: why does the universe have enough independently variable small regions that at least one of them gives eternal inflation?  That is: why is the universe so big?
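Here is a minimal sketch of how self-indication-style weighting (weight each hypothesis by how many observers it contains) answers "why so big?". The priors, region counts, and inflation probability below are all invented illustration numbers, not physics.

```python
# Toy self-indication calculation (all numbers invented).
# Each hypothesis fixes a number of independently varying regions;
# each region seeds an observer-filled cosmos with tiny probability p.
p = 1e-9
priors = {"small (1e3 regions)": 0.5, "big (1e12 regions)": 0.5}
regions = {"small (1e3 regions)": 1e3, "big (1e12 regions)": 1e12}

# Self-indication: weight each hypothesis by its expected number of
# observers, here proportional to its expected number of inflating regions.
weights = {h: priors[h] * regions[h] * p for h in priors}
total = sum(weights.values())
posterior = {h: w / total for h, w in weights.items()}
print(posterior)  # the big universe gets nearly all the posterior mass
```

Even though any one region almost surely fails the inflation conditions, observer-weighting makes a universe with vastly more regions vastly more likely from our point of view.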


Anthropic Breakthrough

If the universe is extremely large, with effective physics and cosmological conditions varying widely from place to place, how can we predict the conditions we should expect to see?  In principle we can use anthropic reasoning, by expecting to see conditions that give rise to observers, and perhaps by expecting to see more often those conditions that give rise to more observers.  But how can we apply this theory when we know so little about the sorts of conditions that produce observers?

Two recent papers suggest a simple but powerful solution:

  • “Predicting the cosmological constant from the causal entropic principle” (Phys Rev 8/07, ungated here)
  • “Predictions of the causal entropic principle for environmental conditions of the universe” (Phys Rev 3/08, ungated here)

This causal entropic principle so far successfully predicts dark energy strength, matter fluctuation ratio, baryonic to dark matter ratio, and baryonic to photon matter ratio!  I’m struggling to understand it though.

A simple reading of the principle is that since observers need entropy gains to function physically, we can estimate the probability that any small spacetime volume contains an observer to be proportional to the entropy gain in that volume.
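On that simple reading, the weighting rule is easy to sketch. The candidate conditions and their entropy-gain values below are invented for illustration only; they are not the papers' actual numbers.

```python
# Sketch of entropy-gain weighting: the chance a region hosts
# observers is taken proportional to its entropy gain.
# Entropy-gain values are invented illustration numbers.
entropy_gain = {"Lambda small": 5.0, "Lambda observed": 9.0, "Lambda large": 1.0}

# Normalize entropy gains into a probability of being the
# condition that a typical observer finds itself under.
total = sum(entropy_gain.values())
p_observed_from = {cond: s / total for cond, s in entropy_gain.items()}
print(p_observed_from)
```

The prediction is then just the condition carrying the most entropy-production weight, which on these toy numbers is the "observed" value.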


Quantum Orthodoxy

Regarding Eliezer’s parable last night, I commented:

I am deeply honored to have my suggestion illustrated with such an eloquent parable. In fairness, I guess I should try to post some quotes from the now dominant opposing view on this.

Last week I wrote:

Physicists mostly punt to philosophers, who use flimsy excuses to declare meaningless the use of specific quantum models to calculate the number of worlds that see particular experimental results.  …  Two recent workshops here and here, my stuff here.

Those workshops and most recent work have been dominated by Oxford’s Saunders and Wallace.  My promised quotes start with their most recent published statement:

A potential rival probability measure, which actually leads to severe problems with diachronic consistency – to take the worlds produced on branching to be equiprobable – is revealed as a will o’ the wisp, relying on numbers that aren’t even approximately defined by dynamical considerations (they are rather defined by the number of kinds of outcome, oblivious to the number of outcomes of each kind). This point has been made a number of times in the literature (see e.g. Saunders [1998], Wallace [2003]), although it is often ignored or forgotten. Thus Lewis [2004] …  and Putnam [2005] … made much of this supposed alternative to branch weights in quantifying probability. (See Saunders [2005], Wallace [2007] for recent and detailed criticisms on this putative probability measure.)
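The contrast the quote draws can be made concrete in a toy repeated measurement (my illustration; the amplitudes are invented). Born weights track the squared amplitudes, while the "equiprobable branches" rival counts outcome sequences, oblivious to the weight of each.

```python
from itertools import product

# Toy two-outcome measurement repeated k times.
# w_up and w_down are squared amplitudes (invented for illustration).
w_up, w_down = 0.9, 0.1
k = 3

branches = list(product("ud", repeat=k))          # 2**k outcome sequences
born = {b: w_up ** b.count("u") * w_down ** b.count("d") for b in branches}
counting = {b: 1 / len(branches) for b in branches}  # the rival measure

# Probability of seeing "all up": Born weights vs equiprobable branches.
print(born[("u", "u", "u")], counting[("u", "u", "u")])  # 0.729 vs 0.125
```

The counting measure depends entirely on how finely branches are individuated (here, on k), while the Born weights are fixed by the dynamics; this is the "will o' the wisp" charge in the quote.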

The most detailed discussion I can find is Wallace 2005.
