Turbulence Contrarians

A few months ago I came across an intriguing contrarian theory:

Hydrogravitational-dynamics (HGD) cosmology … predicts … Earth-mass planets fragmented from plasma at 300 Kyr [after the big bang]. Stars promptly formed from mergers of these gas planets, and chemicals C, N, O, Fe etc. were created by the stars and their supernovae. Seeded gas planets reduced the oxides to hot water oceans [at 2 Myr], … [which] hosted the first organic chemistry and the first life, distributed to the 10^80 planets of the cosmological big bang by comets. … The dark matter of galaxies is mostly primordial planets in proto globular star cluster clumps, 30,000,000 planets per star (not 8!). (more)

Digging further, I found that these contrarians have related views on the puzzlingly high levels of mixing found in oceans, atmospheres, and stars. For example, some invoke fish swimming to explain otherwise-puzzling high levels of ocean water mixing. These turbulence contrarians say that most theorists neglect an important long tail of rare bursts of intense turbulence, each followed by long-lasting “contrails.” These rare bursts not only mix oceans and atmospheres, they also supposedly create more rapid clumping of matter in the early universe, leading to more, and earlier, nomad planets (not tied to stars), which could then lead to early life and its rapid spread.

I didn’t understand turbulence well enough to judge these theories, so I set it all aside. But over the last few months I’ve noticed many reports about puzzling numbers and locations of planets:

What has puzzled observers and theorists so far is the high proportion of planets — roughly one-third to one-half — that are bigger than Earth but smaller than Neptune. … Furthermore, most of them are in tight orbits around their host star, precisely where the modellers say they shouldn’t be. (more)

Last year, researchers detected about a dozen nomad planets, using a technique called gravitational microlensing, which looks for stars whose light is momentarily refocused by the gravity of passing planets. The research produced evidence that roughly two nomads exist for every typical, so-called main-sequence star in our galaxy. The new study estimates that nomads may be up to 50,000 times more common than that. (more)

This new study was theoretical. It fit a power law to the distribution of nomad-planet microlensing observations, and used that fit to predict ~60 nomad planets Pluto-sized or larger per star. Projected down to the comet scale, this power law matches known bounds on comet density. At the 95% confidence-level upper bound on the power-law parameter, the fit instead gives 100,000 such wandering Plutos or larger per star.
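To see how sensitive such an extrapolation is to the assumed slope, here is a minimal sketch of my own (not the study’s actual fit; the normalization and the two slope values are hypothetical stand-ins, back-solved to reproduce the counts quoted above):

```python
# Illustrative power-law extrapolation of nomad-planet counts.
# NOT the study's actual fit: the normalization (2 nomads per star at
# Jupiter's mass, per the microlensing result above) and the slopes are
# hypothetical stand-ins chosen to reproduce the quoted counts.

M_JUP = 1.0          # work in units of Jupiter's mass
M_PLUTO = 6.9e-6     # Pluto's mass in Jupiter masses (~1.3e22 kg / 1.9e27 kg)

def nomads_above(m, n_jup=2.0, alpha=0.3):
    """Cumulative count per star: N(>m) = n_jup * (m / M_JUP)**(-alpha)."""
    return n_jup * (m / M_JUP) ** (-alpha)

# A shallow slope gives tens of Pluto-or-larger nomads per star...
print(f"{nomads_above(M_PLUTO, alpha=0.29):,.0f}")   # ~60
# ...while a steeper slope, of the sort a 95% upper bound might allow,
# gives vastly more:
print(f"{nomads_above(M_PLUTO, alpha=0.91):,.0f}")   # ~100,000
```

The point is just that a modest change in the exponent, compounded over the roughly five decades of mass between Jupiter and Pluto, swings the count by more than three orders of magnitude.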

I take all this as weak support for something in the direction of these contrarian theories – there are more nomad planets than theorists expected, and some of that may come from neglect of early universe turbulence. But thirty million nomad Plutos per star still seems pretty damn unlikely.

FYI, here is part of an email I sent the authors in mid-December, as yet unanswered:

The argument [of yours] I’ve found most persuasive and understandable is presented here: http://arxiv.org/pdf/astro-ph/9904260v1 It says that the usual calculations, based on observations suggesting little turbulence in oceans, atmospheres, etc., neglect the fact that the tiny fraction of space-time volume where turbulence does occur can dominate overall mixing rates. The fact that real mixing seems to be vastly larger does support your claim that rare turbulence is an important contribution to real mixing.

But the argument I most want to evaluate is your claim that planet-sized objects formed quickly after the universe first turned transparent to light. That seems to be based on a claim that there are gravitational instability modes in which density increases locally yet pressure remains constant across space. A little searching finds papers like this: http://dx.doi.org/10.1016/j.physleta.2007.09.069 that do explicit perturbation analysis yet don’t find such modes. Computer simulations also don’t seem to find them. Your response seems to be that these neglect non-linearities and the complexity of turbulence. Yet the arguments you give, at least the ones that I have found, such as in http://arxiv.org/pdf/astro-ph/0610628v1 , seem to be simple perturbation arguments. I keep wondering: what complexity-respecting theory are they perturbations of?

Somehow you seem to be postulating a small fraction of turbulence at the transparency transition that had very disproportionate effects in promoting gravitational instabilities. You seem to have in mind some model of the average changes under rare turbulence that supports unstable constant-pressure density fluctuations. Yet most cosmologists don’t see those as possible. So where exactly is your most detailed argument for the existence of such instabilities in a scenario of mild and/or rare turbulence?
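To make the intermittency point in that email concrete, here is a toy calculation of my own (not the authors’ model): if local mixing rates follow a lognormal with a large log-variance, as intermittency models often assume, then a sliver of the volume carries most of the mean.

```python
# Toy illustration of intermittency (my own, not the authors' model):
# draw local mixing rates from a wide lognormal and see how much of the
# total mixing comes from the most active fraction of the volume.
import numpy as np

rng = np.random.default_rng(0)
sigma = 3.0   # log-standard-deviation; a hypothetical "strong intermittency"
rates = rng.lognormal(mean=0.0, sigma=sigma, size=1_000_000)

print(f"mean/median mixing rate: {rates.mean() / np.median(rates):.0f}x")  # ~90x

# Share of total mixing carried by the most active 1% of sample volume:
top_1pct = np.sort(rates)[-len(rates) // 100:]
print(f"top 1% of volume carries {top_1pct.sum() / rates.sum():.0%}")      # ~75%
```

A survey that happened to sample only the quiet 99% of such a volume would badly underestimate the mean mixing rate, which is the shape of the argument in the linked paper.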

I also never got an answer to this one:

This paper: http://pubs.giss.nasa.gov/docs/1993/1993_Goldman_Canuto.pdf gives the sort of more detailed calculation I was hoping to find in your papers. How much does your analysis disagree with theirs?

  • Cyan

    The dark matter of galaxies is mostly primordial planets in proto globular star cluster clumps, 30,000,000 planets per star (not 8!)

    A quick peek at Wikipedia suggests that baryonic dark matter is not plausible because it implies far less deuterium than is actually observed. Do the authors address this issue?

    Also, cold dark matter seems to have been falsified (nearly concurrently with the date on the paper you link) by observations of the dark matter distribution in dwarf galaxies. Cold dark matter is predicted to clump in the center; actual dark matter distributions are far too smooth. (via)

    • Jess Riedel (http://www.uweb.ucsb.edu/~criedel/)

      Also, cold dark matter seems to have been falsified (nearly concurrently with the date on the paper you link) by observations of the dark matter distribution in dwarf galaxies.

      The cusp problem is a real one for the simplest models of cold dark matter, but the general idea has hardly been falsified. With strong theoretical support in several places—and no distinctly better alternatives—cold WIMPs remain the leading contender for dark matter. The difficulty (as always, with this sort of stuff) is the strongly model-dependent connection between observation and theory. There are many plausible mechanisms to evade the cusp problem while preserving the key aspects of cold WIMPs, and our limited observations don’t tell us whether any of them are right.

      • Cyan

        In the post I linked in my “via”, it is claimed that cold dark matter also predicts more small satellite/dwarf galaxies than are actually observed (further discussed in this post). What do you think?

  • gwern (http://www.gwern.net)

    Just passing by: “Comparing face-to-face meetings, nominal groups, Delphi and prediction markets on an estimation task”, Graefe & Armstrong 2011, http://dl.dropbox.com/u/5317066/2011-graefe.pdf

    We recruited 227 participants (11 groups per method) who were required to solve a quantitative judgment task that did not involve distributed knowledge. This task consisted of ten factual questions, which required percentage estimates. While we did not find statistically significant differences in accuracy between the four methods overall, the results differed somewhat at the individual question level. Delphi was as accurate as FTF for eight questions and outperformed FTF for two questions. By comparison, prediction markets did not outperform FTF for any of the questions and were inferior for three questions. The relative performances of nominal groups and FTF were mixed and the differences were small. We also compared the results from the three structured approaches to prior individual estimates and staticized groups. The three structured approaches were more accurate than participants’ prior individual estimates. Delphi was also more accurate than staticized groups. Nominal groups and prediction markets provided little additional value relative to a simple average of the forecasts…The participants rated personal communications more favorably than computer-mediated interactions. The group interactions in FTF and nominal groups were perceived as being highly cooperative and effective. Prediction markets were rated least favourably: prediction market participants were least satisfied with the group process and perceived their method as the most difficult.

  • Jack the Second

    I didn’t understand turbulence well enough to judge these theories, so I set it all aside. But over the last few months I’ve noticed many reports about puzzling numbers and locations of planets:

    What has puzzled observers and theorists so far is the high proportion of planets — roughly one-third to one-half — that are bigger than Earth but smaller than Neptune. … Furthermore, most of them are in tight orbits around their host star, precisely where the modellers say they shouldn’t be.

    There’s a heavy amount of selection bias here: because of the way our current methods of detecting planets work, we can basically only detect massive planets close to stars. Of course most of the detected planets are in tight orbits around stars: planets close to stars move quicker, which makes them detectable faster, since we’re looking for changes in the star’s appearance. And of course most of the detected planets are fairly massive: they have a bigger impact on their parent star, so we can detect them more easily.

    The planets we’ve discovered so far are certainly not what we would expect from a random sample of planets in the galaxy (at least, what we think a random sample would look like). But they aren’t a random sample of planets in the galaxy. They look roughly like what we’d expect detectable planets to look like, given the methods we’re using, as the sketch below illustrates.
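    A quick toy simulation (my own, with made-up numbers, not a model of any real survey) illustrates the selection effect Jack describes:

```python
# Toy survey selection bias (made-up population and detection rule,
# not a model of any real survey).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
mass = rng.lognormal(mean=0.0, sigma=1.5, size=n)   # planet masses, arbitrary units
period = rng.uniform(1.0, 1000.0, size=n)           # orbital periods, days

# Crude rule: big planets on short periods are the easiest to spot.
p_detect = np.clip(mass / (1.0 + period / 10.0), 0.0, 1.0)
detected = rng.random(n) < p_detect

print("median mass:   all", np.median(mass), " detected", np.median(mass[detected]))
print("median period: all", np.median(period), " detected", np.median(period[detected]))
# The detected sample skews massive and close-in even though the
# underlying population is neither.
```

    The detected medians come out far more massive and far closer-in than the population medians, for exactly the reason given above.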

  • Modernjan

    I’m a physics major, and while I’m having difficulty reading this, I do have some remarks. Cyan is on to something: baryonic dark matter is unlikely to make up a large percentage of dark matter. Also, the “abundance” of large planets in low orbits is generally regarded by astronomers as selection bias, because planets that are smaller and/or in higher orbits are simply more difficult to detect.