Scandalous Heat

Mea Culpa: I was wrong; Eliezer was wrong; Sean Carroll was right.  

Thermodynamics is the study of heat, temperature, pressure, and related phenomena. Physicists have long considered it the area of physics least likely to be overturned by future discoveries, in part because they understand it so well via "statistical mechanics."  Alas, not only are we far from understanding thermodynamics; the situation is much worse than almost everyone (including me until now) admits!  In this post I'll try to make this scandal clear.

For an analogy, consider the intelligent design question: did "God" or a "random" process cause life to appear?  To compute Bayesian probabilities here, we must multiply the prior chance of each option by the likelihood of life appearing given that option, and then renormalize.  So all else equal, the less likely that life would arise randomly, the more likely God caused life. 
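To make the arithmetic of that renormalization step concrete, here is a minimal sketch; every number below is invented purely for illustration, not an estimate of anything:

```python
# All numbers here are hypothetical, chosen only to illustrate the arithmetic.
prior_god, prior_random = 0.5, 0.5    # prior chance of each option
like_god, like_random = 0.9, 1e-6     # chance life appears, given each option

# Multiply each prior by its likelihood, then renormalize so the two sum to one.
joint_god = prior_god * like_god
joint_random = prior_random * like_random
total = joint_god + joint_random
post_god, post_random = joint_god / total, joint_random / total

print(post_god, post_random)
```

All else equal, shrinking `like_random` pushes `post_god` toward one, which is the point of the analogy.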

Imagine that while considering ways life might arise randomly, we had trouble finding any scenario wherein a universe (not just a local region) randomly produced life with substantial probability.  Then imagine someone proposed this solution: a new law of nature saying "life was sure to appear even though it seems unlikely."  Would this solve the problem?  Not in my book.

We are now pretty much in this same situation "explaining" half of thermodynamics.  What we have now are standard distributions (i.e., probability measures) over possible states of physical systems, distributions which do very well at predicting future system states.  That is, if we condition these distributions on what we know about current system states, and then apply local physics dynamics to system states, we get excellent predictions about future states.  We predict heat flows, temperatures, pressures, fluctuations, engines, refineries, etc., all with great precision.  This seems a spectacular success.

BUT, this same approach seems spectacularly wrong when applied to predicting past states of physical systems.  It gets heat flows, temperatures, and pretty much everything else wrong; not just a little, but a lot.  For example, we might think we know about the past via our memories and records, but this standard approach says our records are far more likely to result from random fluctuations than to actually be records of what we think they are.
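A standard toy model, the Ehrenfest urn (my illustration, not anything from the post), shows the two-sidedness of the problem: its statistics are time-symmetric, so the same machinery that correctly predicts a conditioned non-equilibrium state will relax toward equilibrium in the future also "retrodicts" that it relaxed from equilibrium, i.e. that the past had higher entropy:

```python
import random

def step(k, n, rng):
    """Ehrenfest urn: one of n particles, chosen uniformly at random,
    hops to the other half of the box; k counts the left half."""
    return k - 1 if rng.random() < k / n else k + 1

def evolve(k, n, steps, rng):
    for _ in range(steps):
        k = step(k, n, rng)
    return k

rng = random.Random(0)
n, k0, steps, trials = 100, 90, 200, 2000
mean_k = sum(evolve(k0, n, steps, rng) for _ in range(trials)) / trials
print(mean_k)  # drifts from 90 toward the equilibrium value n/2 = 50
```

Because the chain run backward has the same statistics as the chain run forward, conditioning only on k = 90 "now" and asking about 200 steps ago gives the same answer as the forward run: near 50.  Our records say otherwise, which is exactly the mismatch at issue.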

Physicists' and philosophers' standard response to this problem is much like invoking a "life appeared even though it seems unlikely" law: they invoke a "past hypothesis" saying the universe long ago had "very low entropy."  Now when we clump physical states into "coarse" states roughly corresponding to what one might know about a system via crude observations, the "entropy" of each coarse state is the log of the weight that a standard distribution gives to that clump.  So saying that early systems have "very low entropy" just says that the distributions we would usually use to successfully predict their futures are completely, totally, and almost maximally WRONG for predicting their pasts!
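The definition just given can be made concrete in a toy case (again my illustration, not from the post): let the coarse state of N two-sided particles be the count in the left half of a box, weight every exact state equally, and take the entropy of a coarse state to be the log of the total weight of its clump, here just the log of the number of exact states in it:

```python
import math

def coarse_entropy(n, k):
    """S = log(weight of the coarse state) under a uniform distribution
    over exact states: here, the log of the number of exact states with
    exactly k of n particles in the left half, i.e. log C(n, k)."""
    return math.log(math.comb(n, k))

n = 100
print(coarse_entropy(n, 50))  # balanced coarse state: huge clump, high entropy
print(coarse_entropy(n, 0))   # all on one side: a single exact state, S = log 1 = 0
```

"Very low entropy" thus means a coarse state whose clump carries an almost vanishing share of the standard distribution's weight.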

So we "resolve" the massive mistakes standard distributions make when applied to the past by adding a "law" saying basically, "what works well predicting the future makes near maximal mistakes when predicting the distant past."  Which seems to me to basically say that this anomaly is about as bad as it could possibly be.  Worse, this "past hypothesis" is ambiguous in several ways: it doesn't say exactly to what "early" space-times it applies, nor just how "near" maximally wrong standard distributions will be there, nor which of the many very wrong distributions apply. 

Furthermore, it is not even clear that such a hypothesis achieves its intended end.  We have many detailed formal calculations predicting future states given standard assumptions, but I'm aware of no formal calculations predicting past states given the usual machinery plus such a distant past hypothesis. And I've seen no calculations formally evaluating this hypothesis relative to other hypotheses.  So the "past hypothesis" seems more of a vague hope than a reliable inference technique. 

Only a tiny handful of physicists (and philosophers) are trying to explain this past hypothesis, i.e., looking for concrete assumptions that might imply it.  And reviewing their efforts over the last few days I have to report: no one is even remotely close.  Yet thermodynamics is usually taught as if this problem doesn't exist.  Scandalous!

So Eliezer was wrong to say "The Second Law of Thermodynamics is a corollary of Liouville's Theorem"; the second law makes predictions about both future and past, and while future predictions are corollaries, past ones are not.  And I was wrong to suggest that "at least one inflation origin … implies at least one (and perhaps infinitely many) large regions of time-asymmetry like what we see around us."  Sean Carroll was correct to respond:

If you focus on patches of universe like ours today (big, low density, etc), standard counting would suggest that only an infinitesimal fraction of them came from low-entropy inflationary beginnings.

In technical terms, I was fooled by the low dimensional state spaces commonly used to model inflation; unless strange physics fails to map states one-to-one across time, an initial inflation bubble must have as many possible states as any vast universe of eternally expanding bubbles that might follow from it.

For more, see this review article, this book, and this post and presentation by Carroll. 

  • mitchell porter

    Luboš Motl has often written about this, in the context of chiding or attacking Sean for having an interest in “Boltzmann brains”, e.g. here and here. His attitude seems to be that the past having a lower entropy than the future is the reason why the second law works, that the initial conditions will probably be explained by string theory eventually, but meanwhile we shouldn’t be worried about saying that they were indeed low-entropy.

  • http://www.ciphergoth.org/ Paul Crowley

    I’m not grasping something here. The probability of us observing our relatively low-entropy world today, given a high-entropy past, is very low indeed. The probability of these observations given a low-entropy past is vastly higher. The Second Law says nothing about what our priors for these different possibilities should be, but if your prior is Solomonoff-like, then a low-entropy past is a reasonable possibility. If you’re making an argument that it should be reasonable to “predict the past” by running the Second Law backwards, I’m not seeing it; empirically we know that doesn’t work, so perhaps you could set out the argument for why it should in more detail if you’re making it?

  • prase

    Are you arguing that since in a closed system entropy is conserved due to Liouville’s theorem, it cannot rise, and that it’s scandalous that in the standard rationalisation of statistical physics this apparent contradiction with the entropy-maximisation principle is not emphasised enough?

  • http://profile.typepad.com/robinhanson Robin Hanson

    Mitchell, I can’t imagine why one would have much confidence that string theory will explain it.

    Paul, I am saying that assuming a “low entropy past” is pretty much just assuming that your best model gets the past very wrong; it isn’t a concrete alternative worthy of consideration. It isn’t a solution; it is just a clever rewording of the fact that we don’t have a solution.

    prase, not even remotely close.

  • David

    I’m a new reader, so I’m not sure if you’re being hyperbolic for comic effect or if you’re being serious. I wonder how you’d rank prior beliefs as to which field will be “overturned” first: thermodynamics or cosmology. The use of scare quotes around statistical mechanics stands out. Thermodynamics can tell you what will happen to a system not in equilibrium. Once the system has equilibrated its unobserved history is lost. If you find a glass of room temperature water on the counter you can’t tell if it started out as ice or hot. If you find the water slightly cooler than room temperature you can rule out hot but cannot assume ice. Forgive me if this is too elementary but based on your post, I wonder. The Past Hypothesis seems to be nothing more than “unknown or unknowable initial conditions.”

  • http://profile.typepad.com/riemannzeta Michael F. Martin

    No worries. Second law is a mind bender fer sure.

    Listen, have you guys read this paper that purports to connect the Principle of Least Action with the Second Law?

    http://brokensymmetry.typepad.com/broken_symmetry/2009/03/natural-selection-for-least-action-by-kaila-and-annila.html

    Separately, I will say that I believe that the second law might have something to do with how dissipative transfers change with different local geometries because of gravity. Watch water boiling in low gravity to understand why.

    http://science.nasa.gov/headlines/y2001/ast07sep_2.htm

  • Mike

    Is it that “strange” if physics fails to map states one-to-one over time? Certainly this seems to be true in the absence of gravity, but this thinking hasn’t been made consistent with gravity. I personally find it very unattractive. Does it not imply that all the information about us was present at the beginning of the universe? To me that seems contrived.

    I prefer to think, without any formal theory to back it up (yet), that randomness is introduced, perhaps as wavelengths endlessly expand beyond the local horizon during eternal inflation. I would think this solves the problem, because it does not take spectacularly low entropy to initiate eternal inflation, and yet it quickly creates a universe dominated by low-entropy regions.

    In fact you don’t even need randomness, if horizons and Hawking radiation allow for quantum state copying. Consider performing a quantum experiment while falling into a black hole with it, or alternatively observing the Hawking radiation from the black hole. If the horizon in some way produces two versions of the same experiment, then de Sitter horizons can endlessly copy low-entropy states during eternal inflation, while high-entropy states do not benefit from this.

    BTW, I thought that Carroll agreed any “problem” (other than initial conditions) is overcome if one simply assumes the early universe was not in thermal equilibrium — which to me seems very plausible. Whether low-entropy initial conditions are OK or not is at this point a question of aesthetics. The only attempt I know of to explain the creation of a universe — as half an instanton from “nothing” ala Vilenkin — though admittedly extremely crude, speculative, and informal, does seem to provide low-entropy initial conditions.

  • Mike

    Perhaps I misunderstand the tone of this blog. My last post did not intend to say that these questions have been answered, just that plausible outlines of parts of answers are out there, and physicists are working to understand this better. If Robin’s point is to admit that while he thought all these questions had clear answers, they don’t, then I must agree — there are many questions that remain (and they are the deepest). My personal take is this pertains more to the creation of the universe and combining quantum theory and gravity, and less to what is traditionally called “thermodynamics.”

  • prase

    “prase, not even remotely close”

    So the point is that it is somehow unreasonable to suppose a non-maximum entropy initial state, or something else? Can you be more technical about it? It is difficult for me to figure out what the main statement is (having read David’s comment, I don’t seem to be alone).

    “What we have now are standard distributions (i.e., probability measures) over possible states of physical systems, distributions which do very well at predicting future system states. That is, if we condition these distributions on what we know about current system states, and then apply local physics dynamics to system states, we get excellent predictions about future states. We predict heat flows, temperatures, pressures, fluctuations, engines, refineries, etc., all with great precision.”

    By standard distributions you mean (grand-/micro-)canonical ensembles? By system states you mean microscopic or macroscopic states? Since on the micro-level all stuff is time-reversal invariant, it seems that you mean the latter, but speaking about local dynamics associates rather with the former. I am not even sure whether these details would help me to understand.

  • http://profile.typepad.com/robinhanson Robin Hanson

    David I’m not suggesting thermodynamics will be overturned. Our usual approach works well for predicting the future even though we have “unknown or unknowable final conditions.” That it works terribly for the past says the problem is more than “unknown or unknowable initial conditions.”

    Mike, non-one-to-one mapping is a vague hope, to toss on the large pile of other vague hopes that have not yet made much headway. “Not in thermal equilibrium” is another way to say “low entropy” which is another way to say “our usual approach fails terribly.”

    prase, “states” meant exact states, while “coarse states” described sets of such exact states.

  • http://profile.typepad.com/riemannzeta Michael F. Martin

    Bayesians will probably love Lubos Motl’s take on the Arrow of Time:

    http://motls.blogspot.com/2007/12/myths-about-arrow-of-time.html

  • David

    But the final conditions are not unknown. The final condition is thermal equilibrium. Thermodynamics is “failing terribly” at something it could never do. It would be easier to understand your critique if you took it out of the realm of origins of life or the universe. Preferably a non-relativistic, isolated system in which gravity or biology is not a big factor.

  • http://denisbider.blogspot.com denis bider

    But, somehow, everyone appears to be ignoring the elephant in the room: that unlikely events might very well have happened in the evolution of our universe because there was a will that wanted them to happen that way?

    (ducks)

  • Stuart Armstrong

    Penrose was trying to explain the past hypothesis – can’t say his explanation is very convincing, but it is there (Weyl-flat beginnings and ends to the universe – I actually understand that, and can explain it if people want). His explanation for the missing entropy is to do with black holes.

    But this isn’t really credible without some non-time-reversible process tied to entropy in some way (just the weak force being non-time-reversible is probably not enough).

    Retreat to the anthropic principle, alas?

  • http://ynglingasaga.wordpress.com Rolf Andreassen

    You appear to be assuming time symmetry, which is known to be false. CP symmetry is broken, CPT symmetry holds under very broad assumptions, therefore T symmetry is broken. Therefore you should not expect similar results when time-reversing the laws.

  • http://timtyler.org/ Tim Tyler

    Rolf: the universe is probably T-symmetric. The idea that CPT symmetry implies T asymmetry ignores the possibility of charge and parity being cyclic phenomena, which reverse themselves *automatically* when you reverse time. I explain this point here: http://finitenature.com/cpt/

  • http://timtyler.org/ Tim Tyler

    Re: “no one is even remotely close” – Robin loves a long shot!

    Physicists and cosmologists widely agree that there was a big bang. Robin may think he knows better – but I can’t see why he thinks that, looking at this post.

    Re: VILLE R. I. KAILA AND ARTO ANNILA – how you can write a paper like that and *not* mention Roderick Dewar is beyond me. Do the authors really not know?

  • michael vassar

    Coarse states are pretty clearly in the map, not in the territory.
    This post seems to me to be an assertion that it’s still not clear why the map isn’t primarily in Boltzmann brains rather than being where it empirically primarily is.

  • http://caveatbettor.blogspot.com caveat bettor

    Robin, I think I just heard your strongest proof for the existence of God yet.

  • http://profile.typepad.com/robinhanson Robin Hanson

    Michael, Lubos Motl is seriously confused.

    David, there may well be types of particles which don’t interact enough as their density falls to ever come into equilibrium.

    denis, that is another vague hope that hasn’t been taken very far.

    Stuart, yes I recall Penrose’s flat past hypothesis; is it still considered viable?

    Rolf, CPT is enough to make this a huge puzzle/problem.

    Tim, I did not deny a big bang.

    Michael, the puzzle can be expressed without reference to coarse states.

  • http://profile.typepad.com/6p00e551b9414f8833 Psy-Kosh

    Robin: Well, the Liouville’s Theorem stuff gives us “starting from a low entropy state, we should expect to see increasing entropy”, and we can then fully admit to saying “but we’re still confused on how the fluff we had a low entropy state in the first place!”

    I’m unsure, but Barbour’s timeless model _may_ help with this by basically forcing there to be a special unique low entropy chunk of configuration space in one “corner”. That seems a natural consequence of it, so… And of course, in Barbour’s model, you don’t need to make any assumptions like “the low entropy bit was in the past”, because you’ve got rid of time in the first place. So, keeping causality while losing time while having a natural low entropy zone of the configuration space…

  • http://ynglingasaga.wordpress.com Rolf Andreassen

    It occurs to me that nobody has actually done the experiment of T-reversing the whole universe; is it so obvious that it would lead to a low-entropy state? T-reversing requires, among other things, that you reverse the momentum of every particle, which requires you to know the momentum of every particle, which is not possible. Perhaps we should indeed predict a high-entropy state for the past, in the absence of other information. But we do have other information; is it really a good idea to apply a statistical law like 2ndT to a single case, the whole Universe? The sort of time-reversal that leads to a low-entropy state, in this view, is equivalent to adding a whole lot of otherwise unknown information to the system. Then it’s not surprising that you get an effect you don’t see in isolated systems.

  • http://cosmicvariance.com/ Sean Carroll

    Robin, thanks — sincere mea culpas are few and far between! Not that any sort of apology was really necessary.

    The actual situation is pretty straightforward: standard stat mech explains everything we see very well so long as you assume a past hypothesis, whose origin is clearly a matter for cosmology, not for stat mech itself.

    But two things are legitimately scandalous. First, most textbook/intro treatments of stat mech don’t explain the need for a past hypothesis, which is somewhat inexcusable. Second, research-level cosmologists don’t admit that explaining the low-entropy past should be very high on their list of duties.

    But we’re trying to change all that!

  • http://timtyler.org/ Tim Tyler

    You did describe the “past hypothesis” as a “vague hope”.

    The “past hypothesis” is the hypothesis of a low-entropy early state. The big bang also hypothesises a low-entropy early state. Are you splitting hairs between describing the “past hypothesis” as a “vague hope” and “denying the big bang”? That seems ridiculous – but how else can one interpret these comments?

  • Cyan

    Paul Crowley, you wrote,

    The probability of us observing our relatively low-entropy world today, given a high-entropy past, is very low indeed. The probability of these observations given a low-entropy past is vastly higher.

    In this post, Sean Carroll wrote,

    If all pasts consistent with our current macrostate are equally likely, there are many more in which the past was a chaotic mess, in which a vast conspiracy gave rise to our false impression that the past was orderly.

    So it seems that there is a direct contradiction here, at least for probability distributions uniform over microstates.

  • David

    Robin, pardon me, but now you’re postulating new types of particles? Isn’t it a truism that in the limit where particle interaction goes to zero thermodynamics breaks down?

  • http://timtyler.org/ Tim Tyler

    Re: But two things are legitimately scandalous. First, most textbook/intro treatments of stat mech don’t explain the need for a past hypothesis, which is somewhat inexcusable.

    That is usually regarded as a fact – not a hypothesis – because of the strength of the supporting evidence.

    Re: Second, research-level cosmologists don’t admit that explaining the low-entropy past should be very high on their list of duties.

    What caused the low-entropy state at the start of the observed universe is sometimes regarded as being outside the realm of scientific enquiry. However, there have been a number of speculative attempts at the problem – usually claiming that it was not, in fact, the beginning – e.g. the “Big Collision” theory.

    I think these are pretty well known, though not very high-priority. It is hard to prioritise highly ideas that are so speculative and difficult to test.

  • ScentOfViolets

    Oh God, not again. There is nothing new here, and most physicists in the field think Boltzmann Brains is fringe stuff, for very obvious and easy reasons:

    1) No one knows how to really count these states and sum them properly. In particular, no one really knows how to work gravity and entropy together yet, and no one really knows what it means to make a statement about entropy in our particular universe.

    2) No one knows exactly how contingent certain outcomes are upon others – the usual example is evolution.

    3) No one really knows what counts as an ‘observer’, or how to rate or quantify their perceptions.

    4) The priors are not clearly stated.

    So there is nothing ‘clear’ or ‘obvious’ about the argument that BB’s must vastly outnumber the regular kind. Quite the contrary.

    This is all bog-standard stuff, btw. BB’s may sound kinda-sorta cutting edge, but they aren’t, really, until the problems above are addressed a little bit better than they have been.

    Otherwise, it’s like arguing that the universe has to be three-dimensional for what are essentially anthropic reasons, only to discover that there are good dynamical reasons for three large spatial dimensions to be preferred. No one really wants to be making those sorts of mistakes.

  • ScentOfViolets

    What caused the low-entropy state at the start of the observed universe is sometimes regarded as being outside the realm of scientific enquiry. However, there have been a number of speculative attempts at the problem – usually claiming that it was not, in fact, the beginning – e.g. the “Big Collision” theory.

    I think these are pretty well known, though not very high-priority. It is hard to prioritise highly ideas that are so speculative and difficult to test.

    Posted by: Tim Tyler

    Right on all counts. Understand, no one really believes that the universe came to be because of a ‘statistical fluctuation’. But they don’t disbelieve it either. There’s simply no way to tell at this point. It could well turn out that there are excellent reasons to disbelieve a Boltzmann Beginning. But Boltzmann Brains aren’t one of them. At least not yet.

  • http://profile.typepad.com/robinhanson Robin Hanson

    Sean, the “past hypothesis” seems to me more of a restatement of our failure than an actual solution. And I’d be more comfortable accepting it if I saw some formal calculations showing what it implied; the hand-waving makes me nervous.

    Scent, I said nothing about Boltzmann brains.

  • ScentOfViolets

    ?!? I’m not connecting up A to B here with this comment, Robin. I’m just reading what people are writing and clicking links. Like the post you linked to about BB’s.

  • salacious

    I’m confused. If we believe the second law of thermodynamics, that entropy is increasing, then of course it’s true for any non-equilibrium situation that our thermodynamic probability distributions will not “predict” the past accurately. Isn’t that exactly what we mean when we say dS/dt>0, that time evolution is (pseudo-)surjective with respect to macrostates? I don’t get why adding in an initial low entropy boundary condition is cheating – it doesn’t solve the problem of time’s arrow, but it lets us make thermodynamics work on our universe.

    Is it that our assumption that we started from a low entropy state is corrupted by the fact that the empirical data which justifies this assumption might be the product of a Boltzmann brain-esque situation, something like what Cyan quotes Carroll saying @ 5:09?

  • TonyF

    (Disclaimer: I have not read the old postings by Hanson and Yudkowsky that this post refers to, so I just hope that this small comment is not too much out of context.) Is there really a problem here? A basic assumption of statistical mechanics is that the universe started out in a low entropy state. With that assumption it gives predictions that agree with observations. Without it it does not. That the universe actually started out in a non-equilibrium state remains to be explained. There is nothing remarkable about that; there exist a lot of as-yet unsolved scientific questions, and this one just happens to be one of them. And I agree with Mike above that this is more an unsolved problem in cosmology, quantum gravity and the like rather than in statistical mechanics (or even less in classical thermodynamics). (Of course those divisions into different subjects are just conventional and not in nature itself, but practical in the discussion.) A rather popular (and, I think, clear) discussion related to this:
    http://www.usyd.edu.au/time/price/preprints/Price2.pdf

  • ScentOfViolets

    Is it that our assumption that we started from a low entropy state is corrupted by the fact that the empirical data which justifies this assumption might be the product of a Boltzmann brain-esque situation, something like what Cyan quotes Carroll saying @ 5:09?

    The argument, roughly, is that if the universe arose because the initial low entropy state was the result of a random fluctuation, the overwhelming probability is that we would see something else. That we don’t is taken to be an argument against the ‘random fluctuation’ hypothesis. The problem is that the conditional is by long odds – according to some fairly high-power people – not true, or at least not justified. In fact, no one really knows what the most probable universe arising from a ‘quantum fluctuation’ would really look like, because we don’t have a good way of calculating these sorts of probabilities. Not at this point, at least.

  • Doug S.

    Yes, yes, the existence of the universe has been demonstrated to be highly improbable. Now, I vote we worry about something else in the short time we have before we all vanish in a puff of logic.

  • http://profile.typepad.com/riemannzeta Michael F. Martin

    @Robin Hanson

    I agree that Motl is confused. But he’s interesting!

    What did you not like about his explanation? I have to admit that the Bayesian perspective is difficult for me. I came up as a frequentist studying quantum mechanics from Sakurai.

  • http://profile.typepad.com/riemannzeta Michael F. Martin

    Why are people so resistant to the idea that gravity might have something to do with the arrow of time? Come on, these are the two biggest pieces of the puzzle that aren’t fitted together.

    I’m kidding. Sort of.

  • mjgeddes

    Eliezer was wrong

    Needless to say Robin, I’m very pleased indeed. Let me just recap a few of my own comments from August 2008, for readers looking for a truly radical hypothesis:

    (Open Thread, Aug 2008)

    “There was insufficient discussion of Occam’s razor:
    (1) At the beginning of time the universe was in the simplest possible state (minimal entropy density). Why?”

    (Self-Indication Solves Time-Asymmetry, Aug 2008)

    “…the time asymmetry can only be explained by….universal terminal values…built into the structure of the universe….”

    “…Occam’s razor only works because for every knowledge domain there are associated *aesthetic principles*…”

    “….very closely associated with the creation of beauty”

  • http://yudkowsky.net/ Eliezer Yudkowsky

    Given my current perspective on thermodynamics (which has changed somewhat since I last wrote), the low-entropy initial condition looks like it might be something of a red herring for explaining time’s arrow. Judea Pearl’s conditions for identifying a direction of causality in terms of the direction in which there are not unexplained correlations appearing seem more fundamental than Liouville’s Theorem or the Second Law. You can view the Second Law as a consequence of Pearl’s direction of time plus Liouville’s Theorem, in the sense that, if Pearl’s rule against unexplained correlations appearing when moving in the direction regarded as “forward” were to be repealed, then even given Liouville’s Theorem there’d be nothing wrong with expecting water to turn into ice cubes and steam. It would just be an unexplained correlation that appeared while moving forward in time. Similarly, you could have a game-of-Life universe in which there was no analogue of Liouville’s Theorem and multiple initial states could map onto the same end state, the equivalent of water turning naturally into ice cubes and electricity; but if this world obeyed Pearl’s laws governing the direction of causality, you would still have records of the past and the perception of time moving in a particular direction.

  • http://timtyler.org/ Tim Tyler

    The proposed link between Occam’s razor and the low-entropy start conditions is probably meaningless.

    High thermodynamic entropy start conditions – looking similar to the heat death – could probably be specified extremely compactly. Similarly, a PRNG can produce an awful lot of what looks like noise with an extremely small internal state.

  • salacious

    ” a direction of causality in terms of the direction in which there are not unexplained correlations appearing”

    Eliezer, could you provide a link explaining this definition further? From where I stand, this definition is circular. “Unexplained” requires an account of causality to function properly, so you are smuggling in the term that you want to define.

  • http://profile.typepad.com/robinhanson Robin Hanson

    Eliezer, locally unexplained correlations sound to me a lot like what you’d get from a distant low-entropy hypothesis; I look forward to hearing where your thoughts end up when you are ready to share them.

  • http://www.virgilanti.com/journal/ Virge

    Eliezer, I’ve just been reading Gary Drescher’s Good and Real. His interpretation of an Arrow of Time arising within a time-symmetric system matches what you’ve described of Pearl’s (although Drescher builds the understanding using case studies of a simple 2-D colliding balls model, where it seems that Pearl takes a more statistical approach).

    Drescher talks about “…how the apparent arrow of time depends on the lack of coordination within the initial state, and depends on the developing correlations within subsequent states…”

  • http://changegrow.com James Andrix

    For example, we might think we know about the past via our memories and records, but this standard approach says our records are far more likely to result from random fluctuations than to actually be records of what we think they are.

    Even on rereading this makes no sense to me.

  • ScentOfViolets

    All he’s saying, James, is that according to the calculations of some people, it is more likely for the book you are reading to have condensed out of intergalactic space through completely random fluctuations than it is to be the result of some sort of evolutionary process starting far back in time, say, the coalescing of a protostellar cloud into our present-day Sun and planets.

    To say that these calculations are . . . questioned would be an understatement. The current consensus is that there is no way to make these sorts of calculations given the state of the art.

  • http://www.ciphergoth.org/ Paul Crowley

    @Cyan: there’s no contradiction between what I said and what Sean said. We have many more high-entropy predecessor states than low-entropy because there are so many more high-entropy states.

    As I say, if your prior is the Solomonoff prior, there’s a huge weighting in favour of the low-entropy predecessor state.
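Paul's two claims – high-entropy states vastly outnumber low-entropy ones, yet a Solomonoff-style prior still favours the compactly describable low-entropy state – can be illustrated with a toy counting model (my sketch; the particle count and description lengths are illustrative assumptions, not from the thread):

```python
from math import comb

# Toy system: n two-state particles. A macrostate with k excited
# particles contains comb(n, k) microstates.
n = 100
low_count = comb(n, 0)        # "all ground state": a single microstate
high_count = comb(n, n // 2)  # half excited: ~1e29 microstates

# Counting alone favours high entropy overwhelmingly...
print(high_count / low_count)

# ...but a Solomonoff-style prior weights each state by
# 2**(-description_length). "All zeros" has a short O(1)-bit program,
# while pinning down one specific typical microstate takes ~n bits.
# (The 10-bit figure is an illustrative assumption.)
prior_low = 2.0 ** -10
prior_high_each = 2.0 ** -n
print(prior_low / prior_high_each)
```

So although the high-entropy macrostate wins on raw microstate counts, the single low-entropy state carries vastly more prior weight than any individual typical high-entropy microstate.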

  • ScentOfViolets

    We have many more high-entropy predecessor states than low-entropy because there are so many more high-entropy states.

    This statement doesn’t seem to make any sense, on the face of it. Perhaps a specific example would help. I get the impression that there’s some confusion between what is commonly taught as thermodynamics and what system the laws of thermo are being applied to.

  • http://shagbark.livejournal.com Phil Goetz

    Couple of questions:

    • When you say thermodynamics is bad at projecting into the past, you’re only talking about the origin of the universe, right? You’re not saying we get the wrong results when talking about teakettles.
    • When you say “the universe long ago had very low entropy”, do you mean that the distribution of possible past states, conditioned on the present, is a nearly-flat distribution?

    It sounds to me like you’re just saying that (it’s harder to predict the past than to predict the future) = (entropy increases).

    Or is the difficulty that we don’t have priors on initial states? You might be saying that the set of starting states we consider reasonable is only a small subset of the set of possible starting states. There can only be a set of starting states we consider reasonable if we are secretly assigning priors to starting states. We don’t enter these priors on start states into the equations, so the output doesn’t reflect them.

  • http://timtyler.org/ Tim Tyler

    @Paul: Did you read my last comment? Informational entropy is one thing, and thermodynamic entropy is another. It may be quite possible to specify a heat-death-like state highly concisely if you are using a Turing machine for the specification.

  • ScentOfViolets

    Phil, there’s a lot of handwaving going on, as you suggest, and all sorts of appeals to a false analogy. Let me quote:

    I’m pointing out the difficulty of what counts as ‘improbable’. For example, if you look at just the egg, and a sample of dust and gas containing all the constituent atoms needed to make the egg, say, a sample containing 10^30 atoms in a volume of 10^30 cm^3, it’s very improbable that anything even remotely resembling an egg in any state will form, and then still much, much more improbable that a perfect, whole egg will form. A Boltzmann’s Egg vs. a Universal Egg, if you will.

    But suppose that instead of 10^30 atoms, you have 10^60 atoms (in 10^60 cm^3) to play with. Naively, the formation of a Boltzmann egg is still vanishingly improbable, the Universal egg much more so. However, what really happens is that under the effects of gravitation this nebula of gas and dust condenses to form a sun with planets, life evolves, and so – practically in the blink of an eye as these time scales are reckoned – a perfect egg is formed. Under this scenario, the probability of forming a perfect unbroken egg is far, far higher than the probability of an imperfect, broken egg assembled by pure chance.

    So how much more improbable is the formation of a Universal egg as opposed to a Boltzmann’s egg?

    That’s the problem here. It’s not at all clear how to count states to perform the notional summing; no one really has a handle on what entropy means in a universe with certain types of gravity and certain geometries, or even, for that matter, on all the requisite basic physical laws. Without this knowledge, no one knows how contingent certain events are in these ensembles versus other events. Are they as strongly contingent as the formation of water molecules in a box filled with oxygen and hydrogen? As strongly contingent as the evolution example given above? And so on and so forth. Indeed, most physicists say that we’re nowhere near close enough to having a handle on these problems to make any sorts of claims about things like the relative probabilities of Boltzmann’s Brains forming. (I suppose that if you want to talk about a very restricted universe, one that is open, completely flat, has no gravity, and can be assumed to have a ‘statistically even’ distribution of matter, one might be able to make some sort of halfway plausible statement. That’s not the universe we live in, however.)

    This is all well known inside the physics community, and for these reasons most researchers discount the Boltzmann Brain argument. Those who don’t are considered outside the mainstream. Note that no one is talking about whether or not the universe arose from a ‘statistical fluctuation’ per se. Most people would in fact be unsurprised to find out that our universe didn’t arise from one of these problematic events. It’s just that you can’t use this kind of argument (at least, not yet) to discount the hypothesis.

    I hope that clears things up.

  • David

    Even if the hypothesis of a low-entropy origin of the universe were proved false, this would not invalidate thermodynamics or the 2nd Law. So clearly it is incorrect to say that thermodynamics somehow “depends” on the Past Hypothesis, or that it is somehow “scandalous” that the cosmological implications of the 2nd Law are not explained in textbooks.

  • http://profile.typepad.com/robinhanson Robin Hanson

    It almost seems like people are iterating through all possible misunderstandings, suggesting I should have just ended the post with all possible disclaimers. 🙂

    Phil, I said statistical mechanics was bad at predicting the past, not thermodynamics; it gets teakettles wrong too.

    Scent, we know how to calculate the future, so calculating the past would be just as easy if the same approach applied.

    James, Scent is right about records.

    mjgeddes, tim is right; high entropy states can be very simply indicated.

  • ScentOfViolets

    Robin, this is false, weakly so in the first case[1], strongly so in the second. Please read what I write.

    [1]Unless you have, for example, the definitive explanation of whether the universe is open or closed, etc. You are confusing local, gravity-free cases with global, non-gravity-free cases. In other words, don’t extrapolate from a gas enclosed inside a jar.

  • Jess Riedel

    Robin:

    Sean, the “past hypothesis” seems to me more of a restatement of our failure than an actual solution. And I’d be more comfortable accepting it if I saw some formal calculations showing what it implied; the hand-waving makes me nervous.

    The low-entropy-past hypothesis is neither a full solution nor a restatement of our failure; it is a partial explanation. It is like starting with the question “Why is this room hot?” and getting the answer “because there is a thermostat connected to a heating device, which produces heat until the room is brought to a certain temperature”. Sure, this explanation prompts the question “why is the thermostat set so high?”, but that doesn’t mean the explanation is just a useless restatement of the question.

    In comparing the low-entropy-past hypothesis to the fictional “life appeared even though it seems unlikely” law, you ignore the substantial predictive power of the former.

  • ScentOfViolets

    You’ve got to be very careful how to set up the boundary conditions, even in a very simplified classical regime. For example, any collection of particles assembled in a finite volume of space, no matter how large, and allowed to expand outward for whatever length of time you care to specify into a vacuum of infinite space will be in a condition of ‘low entropy’ in comparison to later times. It’s also not very difficult to define conditions so that the total energy of the universe is zero. Thus, a localized ‘quantum fluctuation’ in infinite space and time would under these conditions quite nicely start out in a condition of ‘low’ entropy and propagate forward in time into the infinite future in a very natural way.

    I’m not endorsing this simplified model at all, btw. Just pointing out that reasoning by analogy to a more familiar situation can lead to apparently nonsensical results.

  • http://www.ciphergoth.org/ Paul Crowley

    @Tim Tyler: I follow that, but it doesn’t matter: obviously the concisely-describable high entropy states are a tiny fraction of their total number, so it’s *still* justified to assign a non-negligible prior to a low-entropy state if you’re using Solomonoff induction.
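Paul's counting claim has a standard one-line proof, sketched here as a generic incompressibility argument (the sizes are illustrative): there are fewer than 2**k binary programs shorter than k bits, so among the 2**n microstates of an n-bit system, at most 2**k - 1 can have a description shorter than k bits.

```python
# Incompressibility counting: fewer than 2**k programs are shorter
# than k bits, so at most 2**k - 1 of the 2**n states can have a
# sub-k-bit description. The rest are effectively incompressible.
n, k = 1000, 100  # illustrative sizes
fraction_short = (2**k - 1) / 2**n
print(fraction_short)  # ~2**-900: vanishingly small
```

So even granting that some high-entropy states are concisely describable, they make up a vanishing fraction of all high-entropy states.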

  • Stuart Armstrong

    Stuart, yes I recall Penrose’s flat past hypothesis; is it still considered viable?

    Hard to tell. It is very speculative, and Penrose is reaching the end of his career, and some people are clearly humouring him… But there is some weak evidence supporting it, and it does appear to be testable.

    And the mathematics appear correct – in the absence of the Weyl tensor, the slow, infinite, cold end of the universe is indistinguishable from the fast burning expansion of the big bang…

  • Tim Tyler

    Paul, the anthropic principle explains the low entropy of the origin. That explanation trumps Occam’s razor. Even if low-thermodynamic-entropy states had long descriptions, we would still see them at the origin – since otherwise, observers would not have evolved.

  • http://TobyBartels.name/ Toby Bartels

    Perhaps the vagueness in the past hypothesis is the problem.

    Which seems to me to basically say that this anomaly is about as bad as it could possibly be.

    Let us say instead that the anomaly is exactly as bad as it could possibly be: the large-scale entropy of the universe at the Big Bang is zero. This is a very strong claim, and (like all extreme statements) it’s simple too. It’s too strong, compared to the evidence, to justify stating it without qualification, but as a hypothesis I like it.

    While I’m here, I’ll recommend Huw Price’s 1996 book Time’s Arrow and Archimedes’ Point on the arrow of time, although I don’t agree with Price about everything.

    And I agree that the silence about this, the idea that the second law is fully understood, is a scandal.

  • Pingback: Overcoming Bias : Still Scandalous Heat

  • Pingback: Overcoming Bias : From Eternity To Here

  • Pingback: Overcoming Bias : Tegmark’s Vast Math