Tag Archives: Physics

Irreducible Detail

Our best theories vary in generality. Some theories are very general, but most are more context specific. Putting all of our best theories together usually doesn’t let us make exact predictions on most variables of interest. We often express this fact formally in our models via “noise,” which represents other factors that we can’t yet predict.

For each of our theories there was a point in time when we didn’t have it yet. Thus we expect to continue to learn more theories, which will let us make more precise predictions. And so it might seem like we can’t constrain our eventual power of prediction; maybe we will have powerful enough theories to predict everything exactly.

But that doesn’t seem right either. Our best theories in many areas tell us about fundamental limits on our prediction abilities, and thus limits on how powerful future simple general theories could be. For example:

  • Thermodynamics – We can predict some gross features of future physical states, but the entropy of a system sets a very high (negentropy) cost to learn precise info about the state of that system. If thermodynamics is right, there will never be a general theory to let one predict future states more cheaply than this.
  • Finance – Finance theory has identified many relevant parameters to predict the overall distribution of future asset returns. However, finance theory strongly suggests that it is usually very hard to predict details of the specific future returns of specific assets. The ability to do so would be worth such a huge amount that there just can’t be many who often have such an ability. The cost to gain such an ability must usually be more than the gains from trading it.
  • Cryptography – A well devised code looks random to an untrained eye. As there are a great many possible codes, and a great many ways to find weaknesses in them, it doesn’t seem like there could be any general way to break all codes. Instead code breaking is a matter of knowing lots of specific things about codes and ways they might be broken. People use codes when they expect the cost of breaking them to be prohibitive, and such expectations are usually right.
  • Innovation – Economic theory can predict many features of economies, and of how economies change and grow. And innovation contributes greatly to growth. But economists also strongly expect that the details of particular future innovations cannot be predicted except at a prohibitive cost. Since knowing of innovations ahead of time can often be used for great private profit, and would speed up the introduction of those innovations, it seems that no cheap-to-apply simple general theories can exist which predict the details of most innovations well ahead of time.
  • Ecosystems – We understand some ways in which parameters of ecosystems correlate with their environments. Most of these make sense in terms of general theories of natural selection and genetics. However, most ecologists strongly suspect that the vast majority of the details of particular ecosystems and the species that inhabit them are not easily predictable by simple general theories. Evolution says that many details will be well matched to other details, but to predict them you must know much about the other details to which they match.

In thermodynamics, finance, cryptography, innovations, and ecosystems, we have learned that while there are many useful generalities, the universe is also chock full of important irreducible incompressible detail. As this is true at many levels of abstraction, I would add this entry to the above list:

  • Intelligence – General theories tell us what intelligence means, and how it can generalize across tasks and contexts. But most everything we’ve learned about intelligence suggests that the key to smarts is having many not-fully-general tools. Human brains are smart mainly by containing many powerful not-fully-general modules, and using many modules to do each task. These modules would not work well in all possible universes, but they often do in ours. Ordinary software also gets smart by containing many powerful modules. While the architecture that organizes those modules can make some difference, that difference is mostly small compared to having more and better modules. In a world of competing software firms, most ways to improve modules or find new ones cost more than the profits they’d induce.

If most value in intelligence comes from the accumulation of many expensive parts, there may well be no powerful general theories to be discovered to revolutionize future AI, and give an overwhelming advantage to the first project to discover them. Which is the main reason that I’m skeptical about AI foom, the scenario where an initially small project quickly grows to take over the world.

Added 7p: Peter McCluskey has thoughtful commentary here.

Tegmark’s Vast Math

I recently had a surprise chance to meet Max Tegmark, and so I first quickly read his enjoyable new book Our Mathematical Universe. It covers many foundations of physics topics that he correctly says are unfairly neglected. Since I’ve collected many opinions on foundations of physics over decades, I can’t resist mentioning the many ways I agree and disagree with him.

Let me start with what Tegmark presents as his main point, which is that the total universe is BIG, almost as big as it could possibly be. There’s a vast universe out there that we can’t see, and will never see. That is, not only does space extend far beyond our cosmological horizon, but out there are places that sit in very different equilibria of fundamental physics (e.g., have a different number of useful dimensions), and nearby are the different “many worlds” of quantum mechanics.

Furthermore, and this is Tegmark’s most distinctive point, there are whole different places “out there” completely causally (and spatially) disconnected from our universe, which follow completely different fundamental physics. In fact, all such mathematically describable places really exist, in the sense that any self-aware creatures there actually feel. Tegmark seems to stop short, however, of David Lewis, who said that all self-consistent possible worlds really exist.

Tegmark’s strongest argument for his distinctive claim, I think, is that we might find that the basic math of our physics is rare in allowing for intelligent life. In that case, the fact of our existence should make us suspect that many places with physics based on other maths are out there somewhere.

A Future Of Pipes

Back in March I wrote:

Somewhere around 2035 or so … the (free) energy used per [computer] gate operation will fall to the level thermodynamics says is required to [logically] erase a bit of information. After this point, the energy cost per computation can only fall by switching to “reversible” computing designs, that only rarely [logically] erase bits. … Computer gates … today … in effect irreversibly erase many bits per gate operation. To erase fewer bits instead, gates must be run “adiabatically,” i.e., slowly enough so key parameters can change smoothly. In this case, the rate of bit erasure per operation is proportional to speed; run a gate twice as slowly, and it erases only half as many bits per operation. Once reversible computing is the norm, gains in making more smaller faster gates will have to be split, some going to let gates run more slowly, and the rest going to more operations. (more)

The future of computing, after about 2035, is adiabatic reversible hardware. When such hardware runs at a cost-minimizing speed, half of the total budget is spent on computer hardware, and the other half is spent on energy and cooling for that hardware. Thus after 2035 or so, about as much will be spent on computer hardware and a physical space to place it as will be spent on hardware and space for systems to generate and transport energy into the computers, and to absorb and transport heat away from those computers. So if you seek a career for a futuristic world dominated by computers, note that a career making or maintaining energy or cooling systems may be just as promising as a career making or maintaining computing hardware.

We can imagine lots of futuristic ways to cheaply and compactly make and transport energy. These include thorium reactors and superconducting power cables. It is harder to imagine futuristic ways to absorb and transport heat. So we are likely to stay stuck with existing approaches to cooling. And the best of these, at least on large scales, is to just push cool fluids past the hardware. And the main expense in this approach is for the pipes to transport those fluids, and the space to hold those pipes.

Thus in future cities crammed with computer hardware, roughly half of the volume is likely to be taken up by pipes that move cooling fluids in and out. And the tech for such pipes will probably be more stable than tech for energy or computers. So if you want a stable career managing something that will stay very valuable for a long time, consider plumbing.

Will this focus on cooling limit city sizes? After all, the surface area of a city, where cooling fluids can go in and out, goes as the square of city scale, while the volume to be cooled goes as the cube of city scale. The ratio of volume to surface area is thus linear in city scale. So does our ability to cool cities fall inversely with city scale?

Actually, no. We have good fractal pipe designs to efficiently import fluids like air or water from outside a city to near every point in that city, and to then export hot fluids from near every point to outside the city. These fractal designs require cost overheads that are only logarithmic in the total size of the city. That is, when you double the city size, such overheads increase by only a constant amount, instead of doubling.

For example, there is a fractal design for piping both smoothly flowing and turbulent cooling fluids where, holding constant the fluid temperature and pressure as well as the cooling required per unit volume, the fraction of city volume devoted to cooling pipes goes as the logarithm of the city’s volume. That is, every time the total city volume doubles, the same additional fraction of that volume must be devoted to a new kind of pipe to handle the larger scale. The pressure drop across such pipes also goes as the logarithm of city volume.
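
To see the shape of this argument, here is a minimal numeric sketch (all constants are hypothetical, chosen only to show the two curves). It assumes surface-limited cooling capacity scales as the square of city scale while heat generated scales as the cube, and that a fractal pipe network instead gives up a constant extra fraction of volume per doubling of city volume:

```python
# A rough numeric illustration of the scaling argument above, assuming
# surface-limited cooling scales as L^2 while heat generated scales with
# volume L^3, and that a fractal pipe network instead devotes a constant
# extra fraction of volume per doubling of city volume (all constants are
# hypothetical, chosen only to show the shape of the two curves).
import math

k_per_doubling = 0.01      # hypothetical extra pipe-volume fraction per volume doubling
base_fraction = 0.05       # hypothetical pipe fraction for the reference city

for scale in [1, 2, 4, 8, 16, 32]:           # linear city scale L
    volume = scale ** 3
    surface = scale ** 2
    surface_cooling_per_volume = surface / volume          # falls as 1/L
    doublings = math.log2(volume)
    pipe_fraction = base_fraction + k_per_doubling * doublings
    print(f"L={scale:2d}: surface cooling per unit volume {surface_cooling_per_volume:.3f}, "
          f"fractal pipe fraction {pipe_fraction:.3f}")
```

Surface-only cooling per unit volume collapses as the city grows, while the fractal pipe fraction creeps up only slowly, doubling after doubling.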

The economic value produced in a city is often modeled as a low power (greater than one) of the economic activity enclosed in that city. Since mathematically, for a large enough volume a power of volume will grow faster than the logarithm of volume, the greater value produced in larger cities can easily pay for their larger costs of cooling. Cooling does not seem to limit feasible city size. At least when there are big reservoirs of cool fluids like air or water around.

I don’t know if the future is still plastics. But I do know that a big chunk of it will be pipes.

Added 10Nov 4p: Proof of “When such hardware runs …”: V = value, C = cost, N = # processors, s = speed run them at, p,q = prices. V = N*s, C = p*N + q*N*s^2. So C/V = p/s + q*s. Picking s to minimize C/V gives p = q*s^2, so the two parts of cost C are equal. Also, at that speed C/V = 2*sqrt(p*q).
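
For those who want to check that algebra, here is a minimal symbolic sketch using sympy, treating p and q as arbitrary prices rather than estimates of real hardware or energy costs:

```python
# A minimal symbolic check of the added note's algebra, assuming the same
# cost model: value V = N*s, cost C = p*N + q*N*s^2.
import sympy as sp

N, s, p, q = sp.symbols('N s p q', positive=True)
V = N * s                      # value: total operations per unit time
C = p * N + q * N * s**2       # cost: hardware (p*N) plus energy and cooling (q*N*s^2)

cost_per_value = sp.simplify(C / V)                    # p/s + q*s
s_opt = sp.solve(sp.diff(cost_per_value, s), s)[0]     # cost-minimizing speed, sqrt(p/q)

hardware = (p * N).subs(s, s_opt)                      # hardware part of cost at s_opt
energy = (q * N * s**2).subs(s, s_opt)                 # energy/cooling part at s_opt
print(sp.simplify(hardware - energy))                  # 0: the two cost halves are equal
print(sp.simplify(cost_per_value.subs(s, s_opt)))      # 2*sqrt(p*q)
```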

Exemplary Futurism

Back in May there was a Starship Century Symposium in San Diego. I didn’t attend, but I later watched videos of most of the talks (here, here, book coming here). Many were about attempts by engineers and scientists to sketch out feasible designs for functioning starships. They’ve been at this for many decades, and have made some progress.

Most of us have seen many starships depicted in movies, and you might figure that since the future is so uncertain, fictional starships are our best guide to real future starships. If no one can know anything, the thinking goes, no one can beat fictional imagination. But that seems very wrong to me; we get a lot better insight into real starships from serious attempts to design them.

It is worth noting that these folks do futurism the way I say it should be done: making best combos.

Form best estimates on each variable one at a time, and then adjust each best estimate to take into account the others, until one has a reasonably coherent baseline combination: a set of variable values that each seem reasonable given the others. (more)

Even though this is the standard approach of historians, schedulers, and puzzle solvers, many express strong disapproval about doing futurism this way. Well, at least when predicting social consequences. Starship designers don’t seem to get much flak. Why? I’d guess it is because they are high status. Starships and physicists are sexy enough that we forget to be politically correct, and just let experts do what seems best to them.

Some might say this is okay for engineering, because we know lots of engineering, but not ok for social things, because we know little there. But that is just wrong. Not only do we know lots about social things, this is still the right approach in areas of history and engineering where we know a lot less.

It is also worth noting that the usual starship vision mainly seems interesting if one expects familiar growth rates to continue for a while. A starship carrying humans would take about a decade or two in flight time, and a thousand times as much energy as the Apollo moon rockets. And today our economy doubles in about 15 years. So if energy capacity doubled with the economy, it would take about 150 years to get that capacity. Or since energy has doubled about every 25 years lately, it might take 250 years. But 150-250 years still seems culturally accessible to us; we feel we can relate to folks 200 years ago. Much more at least than to people 20,000 years ago.
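
Here is the doubling arithmetic behind those 150 and 250 year figures, as a quick sketch using only the numbers quoted above:

```python
# A back-of-the-envelope check of the timing argument above: how long until
# total energy capacity grows by the stated factor of a thousand, for the
# two doubling times quoted in the post.
import math

energy_factor_needed = 1_000
doublings_needed = math.log2(energy_factor_needed)    # about 10 doublings

for doubling_time_years in (15, 25):                  # economy vs. recent energy growth
    years = doublings_needed * doubling_time_years
    print(f"doubling every {doubling_time_years} yr -> ~{years:.0f} years")
# -> roughly 150 and 250 years, the range quoted above
```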

But if growth rates either slow down or speed up a lot, this doesn’t work. For example, if the economy doubled every thousand years, as it did during the farming era, then it would take ten thousand years to get enough capacity. And we feel much less related to people who will live ten thousand years in the future. We expect their culture to change so much that we are much less interested in stories about them, or in thinking about what they will do.

If growth rates instead speed up by the same factor, the economy would then double every three months. And then a decade-long flight to another star would encompass forty doublings, or a factor of a trillion. At growth rates like that, a journey that long just seems crazy. You’ll have hardly left our system before another much better ship whizzes past you. And even if your ship gets there first, the civilization back home by then is likely to be culturally unrecognizable. An economy a trillion times bigger is likely to be a very different place.

Of course fast growth can’t go on forever. So a fast growing economy will slow down eventually. And that is when it would make sense to take a decade long journey, when a decade doesn’t encompass that much cultural change back home. But such a post-fast-growth society will likely be so different from ours as to deflate most of our interest in thinking about their starships.

If our industry-era growth rates continue on for several centuries, then we may have descendants capable of starflight, and culturally similar enough to us that we care a lot about them. But if growth rates either slow down or speed up a lot, the descendants who are finally willing and able to fly to the stars are likely to be so different from us that we are much less interested in them.

Slowing Computer Gains

Whenever I see an article in the popular sci/tech press on the long term future of computing hardware, it is almost always on quantum computing. I’m not talking about articles on smarter software, more robots, or putting chips on most objects around us; those are about new ways to use the same sort of hardware. I’m talking about articles on how the computer chips themselves will change.

This quantum focus probably isn’t because quantum computing is that important to the future of computing, nor because readers are especially interested in distant futures. No, it is probably because quantum computing is sexy in academia, appearing often in top academic journals and university press releases. After all, sci/tech readers mainly want to affiliate with impressive people, or show they are up on the latest, not actually learn about the universe or the future.

If you search for “future of computing hardware”, you will mostly find articles on 3D hardware, where chips are in effect layered directly on top of one another, because chip makers are running into limits to making chip features smaller. This makes sense, as that seems the next big challenge for hardware firms.

But in fact the rest of the computer world is still early in the process of adjusting to the last big hardware revolution: parallel computing. Because of the dramatic slowdown in chip speed gains over the last decade, the computing world must get used to writing a lot more parallel software. Since that is just harder, there’s a real economic sense in which computer hardware gains have slowed down lately.

The computer world may need to make additional adaptations to accommodate 3D chips, as just breaking a program into parallel processes may not be enough; one may also have to keep relevant memory closer to each processor to achieve the full potential of 3D chips. The extra effort to go into 3D and make these adaptations suggests that the rate of real economic gains from computer hardware will slow down yet again with 3D.

Somewhere around 2035 or so, an even bigger revolution will be required. That is about when the (free) energy used per gate operation will fall to the level thermodynamics says is required to erase a bit of information. After this point, the energy cost per computation can only fall by switching to “reversible” computing designs, that only rarely erase bits. See (source):

[Figure: trend of energy used per computer gate operation over time, falling toward the thermodynamic bit-erasure limit around 2035.]

Computer operations are irreversible, and use (free) energy to in effect erase bits, when they lack a one-to-one mapping between input and output states. But any irreversible mapping can be converted to a reversible one-to-one mapping by saving its input state along with its output state. Furthermore, a clever fractal trick allows one to create a reversible version of any irreversible computation that takes exactly the same time, costing only a logarithmic-in-time overhead of extra parallel processors and memory to reversibly erase intermediate computing steps in the background (Bennett 1989).

Computer gates are usually designed today to change as rapidly as possible, and as a result in effect irreversibly erase many bits per gate operation. To erase fewer bits instead, gates must be run “adiabatically,” i.e., slowly enough so key parameters can change smoothly. In this case, the rate of bit erasure per operation is proportional to speed; run a gate twice as slowly, and it erases only half as many bits per operation (Younis 1994).

Once reversible computing is the norm, gains in making more smaller faster gates will have to be split, some going to let gates run more slowly, and the rest going to more operations. This will further slow the rate at which the world gains more economic value from computers. Sometime much further in the future, quantum computing may be feasible enough so it is sometimes worth using special quantum processors inside larger ordinary computing systems. Fully quantum computing is even further off.
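
As a rough illustration of that split, here is a minimal sketch. It assumes, per the adiabatic point above, that energy per gate operation scales linearly with gate speed, so total power goes roughly as the number of gates times speed squared; under a fixed power and cooling budget, each factor of four more gates then buys only a factor of two more operations. The constants are arbitrary:

```python
# A minimal sketch of the "split the gains" point above, assuming an adiabatic
# gate erases bits (and so spends energy) in proportion to its speed. Then
# power drawn ~ N * s^2 for N gates at speed s, and under a fixed power budget
# the speed must fall as the gate count rises. Constants are illustrative.
import math

power_budget = 1.0        # arbitrary fixed power/cooling budget
c = 1.0                   # arbitrary energy-per-op-per-unit-speed constant

def throughput(n_gates):
    speed = math.sqrt(power_budget / (c * n_gates))   # speed that exactly uses the budget
    return n_gates * speed                            # total operations per second

base = throughput(1.0)
for hardware_gain in (1, 4, 16, 64):
    print(f"{hardware_gain:3d}x more gates -> {throughput(hardware_gain)/base:.1f}x more operations")
# each 4x hardware gain yields only ~2x more operations: gains are split between
# running more gates and running each one more slowly
```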

My overall image of the future of computing is of continued steady gains at the lowest levels, but with slower rates of economic gains after each new computer hardware revolution. So the “effective Moore’s law” rate of computer capability gains will slow in discrete steps over the next century or so. We’ve already seen a slowdown from a need for parallelism, and within the next decade or so we’ll see more slowdown from a need to adapt to 3D chips. Then around 2035 or so we’ll see a big reversibility slowdown due to a need to divide hardware gains between doing more operations and using less energy per operation.

Overall though, I doubt the rate of effective gains will slow down by more than a factor of four over the next half century. So whatever you might have thought could happen in 50 years if Moore’s law had continued steadily is pretty likely to happen within 200 years. And since brain emulation is already nicely parallel, including with matching memory usage, I doubt the relevant rate of gains there will slow by much more than a factor of two.

Functions /= Tautologies

Bryan:

Calling the mind a computer is just a metaphor – and using metaphors to infer literal truths about the world is a fallacy.

Me:

I’m saying that your mind is literally a signal processing system. … While minds have a great many features, a powerful theory, in fact our standard theory, to explain the mix of features we see associated with minds, is that minds fundamentally function to process signals, and that brains are the physical devices that achieve that function.

Bryan:

The “standard theories of minds as signal processors” that Robin refers to aren’t theories at all. They’re just eccentric tautologies. As Robin has frankly admitted to me several times, he uses the term “signal processors” so broadly that everything whatsoever is a signal processor. On Robin’s terms, a rock is a signal processor. What “signals” do rocks “process”? By moving or not moving, rocks process signals about the mass and distance of other objects in the universe.

Consider an analogy. Our theory of table legs is that they function mainly for structural support; table legs hold up tables. Yes, anything can be analyzed for the structural support it provides, and most objects can be arranged so as to provide some degree of structural support to something else. But that doesn’t make our theories of structural support tautologies. Our theories can tell us how efficient and effective any given arrangement of objects is at achieving this function. If we believe that something was designed to be a table leg, our theories of structural support make predictions about what sort of object arrangement it will be. And if our table is missing a leg, such theories recommend object arrangements to use as a substitute table leg.

Similarly, while any object arrangement can be analyzed in terms of the signals it sends out and the ways that it transforms incoming signals into outgoing signals, all of these do not function equally well as signal processors. If we know that something was designed as a signal processor, and know something about the kinds of signals it was designed to process for what purposes, then our theories of signal processing make predictions about how this thing will be designed. And if we find ourselves missing a part of a signal processor, such theories tell us what sort of replacement part(s) can efficiently restore the signaling function.

Animal brains evolved to direct animal actions. Fish, for example, swim toward prey and away from predators. So fish brains need to take in external signals about the locations of other fish, and process those signals into useful directions to give muscles about how to change the direction and intensity of swimming. This makes all sorts of predictions about how fish brains will be designed by evolution.

Human brains evolved to achieve many more functions than merely to direct our speed and direction of motion. But we understand many of those functions in quite some detail, and that understanding implies many predictions about how human brains are efficiently designed to simultaneously achieve these functions.

This same combination of general signal processing theory and specific understandings about the functions evolution designed human brains to perform also implies predictions on how to substitute wholesale for human brain functions. For example, knowing that brain cells function mainly to take signals coming from other cells, transform them, and pass them on to other cells, implies predictions on what cell details one needs to emulate to replicate the signaling function of a human brain cell. It also makes predictions like:

In order to manage its intended input-output relation, a single processor simply must be designed to minimize the coupling between its designed input, output, and internal channels, and all of its other “extra” physical degrees of freedom. (more)

All of which goes to show that signal processing theory is far from a tautology, even if every object can be seen as in some way processing signals.

Theories vs. Metaphors

I have said things like:

We should expect brain emulation to be feasible because brains function to process signals, and the decoupling of signal dimensions from other system dimensions is central to achieving the function of a signal processor.

Bryan Caplan says I make:

the Metaphorical Fallacy. Its general form:

1. X is metaphorically Y.

2. Y is literally Z.

3. Therefore, X is literally Z.

…. To take a not-so-random example, … Robin says many crazy things … like:

1. The human mind is a computer.

2. Computers’ data can be uploaded to another computer.

3. Therefore, the human mind can be uploaded to a computer.

No, I’m pretty sure that I’m saying that your mind is literally a signal processing system. Not just metaphorically; literally. That is, while minds have a great many features, a powerful theory, in fact our standard theory, to explain the mix of features we see associated with minds, is that minds fundamentally function to process signals, and that brains are the physical devices that achieve that function. And our standard theories of how physical devices achieve signal processing functions predict that we can replicate, or “emulate”, the same signal processing functions in quite different physical devices. In fact, such theories tell us how to replicate such functions in other devices.

Of course you can, like Bryan, disagree with our standard theory that the main function of minds is to process signals. Or you could disagree with our standard theories of how that function is achieved by physical devices. Or you could note that since the brain is a signal processor of unparalleled complexity, we are a long way away from knowing how to replicate it in other physical hardware.

But given how rich and well developed are our standard theories of minds as signal processors, signal processors in general, and the implementation of signal processors in physical hardware, it hardly seems fair to reject my conclusion based on a mere “metaphor.”

Murphy on Growth Limits

Physicist Tom Murphy says he argued with “an established economics professor from a prestigious institution,” on whether economic growth can continue forever. They both agreed to assume Earth-bound economies, and quickly also agreed that total energy usage must reach an upper bound within centuries, because of Earth’s limited ability to discard waste heat via radiation.

Murphy then argued that the economy cannot grow exponentially if any small but necessary part of it fails to grow, or if any small fraction of people fail to give value to the things that do grow:

Not everyone will want to live this virtual existence. … Many would prefer the smell of real flowers. … You might be able to simulate all these things, but not everyone will want to live an artificial life. And as long as there are any holdouts, the plan of squeezing energy requirements to some arbitrarily low level fails. …

Energy today is roughly 10% of GDP. Let’s say we cap the physical amount available each year at some level, but allow GDP to keep growing. … Then in order to have real GDP growth on top of flat energy, the fractional cost of energy goes down relative to the GDP as a whole. … But if energy became arbitrarily cheap, someone could buy all of it. … There will be a floor to how low energy prices can go as a fraction of GDP. … So once our fixed annual energy costs 1% of GDP, the 99% remaining will find itself stuck. If it tries to grow, energy prices must grow in proportion and we have monetary inflation, but no real growth. …

Chefs will continue to innovate. Imagine a preparation/presentation 400 years from now that would blow your mind. … No more energy, no more ingredients, yet of increased value to society. … [But] Keith plopped the tuna onto the bread in an inverted container-shaped lump, then put the other piece of bread on top without first spreading the tuna. … I asked if he intended to spread the tuna before eating it. He looked at me quizzically, and said—memorably, “It all goes in the same place.” My point is that the stunning presentation of desserts will not have universal value to society. It all goes in the same place, after all. (more; HT Amara Graps)

While I agree with Murphy’s conclusion that the utility an average human-like mind gains from their life cannot increase exponentially forever, Murphy’s arguments for that conclusion are wrong. In particular, if only a fixed non-zero fraction of such minds could increase their utility exponentially, the average utility would also increase exponentially.

Also, the standard power law (Cobb-Douglas) functional form for how utility depends on several inputs says that utility can grow without bound when one sector of the economy grows without bound, even when another needed sector does not grow at all and takes a fixed fraction of income. For example, if utility U is given by U = E^a * N^(1-a), where E is energy and N is non-energy, then at competitive prices the fraction of income going to the energy sector is fixed at a, no matter how big N gets. So N can grow without bound, making U grow without bound, while E is fixed.
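
Here is a small symbolic check of that claim, a sketch using sympy with the same Cobb-Douglas form and competitive pricing (each input paid its marginal product):

```python
# A small check of the Cobb-Douglas claim above: with U = E^a * N^(1-a) and
# competitive prices (each input paid its marginal product), the energy
# sector's share of income is the constant a, no matter how large N grows.
import sympy as sp

E, N, a = sp.symbols('E N a', positive=True)
U = E**a * N**(1 - a)

energy_income = E * sp.diff(U, E)        # energy quantity times its marginal product
energy_share = sp.simplify(energy_income / U)
print(energy_share)                      # -> a

# With E fixed, U still grows without bound as N grows (a = 1/10 is illustrative):
print(sp.limit(U.subs({E: 1, a: sp.Rational(1, 10)}), N, sp.oo))   # -> oo
```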

My skepticism on exponential growth is instead based on an expectation of strongly diminishing returns to everything, including improved designs:

Imagine that … over the last million years they’ve also been searching the space of enjoyable virtual reality designs. From the very beginning they had designs offering people vast galaxies of fascinating exotic places to visit, and vast numbers of subjects to command. (Of course most of that wasn’t computed in much detail until the person interacted with related things.) For a million years they have searched for possible story lines to create engaging and satisfying experiences in such vast places, without requiring more computational resources behind the scenes to manage.

Now in this context, imagine what it means for “imagination” to improve by 4% per year. That is a factor of a billion every 529 years. If we are talking about utility gains, this means that you’d be indifferent between keeping a current virtual reality design, or taking a one in a two billion chance to get a virtual reality design from 529 years later. If you lose this gamble, you have to take a half-utility design, which gives you only half of the utility of the design you started with. …

It may be possible to create creatures who have such strong preferences for subtle differences, differences that can only be found after a million or trillion years of a vast galactic or larger civilization searching the space of possible designs. But humans do not seem remotely like such creatures. (more)

Neither mass, nor energy usage, nor population, nor utility per person for fixed mass and energy can grow exponentially forever.

Henson On Ems

Keith Henson, of whom I’ve long been a fan, has a new article where he imagines our descendants as fragmenting, Roman-Empire-like, into distinct cultures, each a ~300 meter sphere holding ~30 million ems, each running ~1 million times faster than a human, using ~1 TW of power, and sitting in the ocean for cooling. The 300m radius comes from a max two subjective seconds of communication delay, and the 30 million number comes from assuming a shell of ~10cm cubes, each an em. (Quotes below)

The 10cm size could be way off, but the rest is reasonable, at least given Henson’s key assumptions that 1) competition to seem sexy would push ems to run as fast as feasible, and 2) the scale of em “population centers” and culture is set by the distance at which talk suffers a two subjective seconds delay.
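
For reference, here is the delay arithmetic behind those numbers, as a rough sketch assuming light-speed signals and a million-fold subjective speedup. Since it is ambiguous whether 300 meters is a radius or a diameter, this only pins things down to within a factor of two:

```python
# A rough check of the communication-delay arithmetic summarized above,
# assuming light-speed signalling and a 1,000,000x subjective speedup.
# (Whether 300 m is a radius or a diameter changes the answer by 2x.)
c = 299_792_458.0          # speed of light, m/s
speedup = 1_000_000        # subjective seconds per real second

for distance_m in (300.0, 600.0):                 # one-way distance across the sphere
    real_delay_s = distance_m / c
    subjective_delay_s = real_delay_s * speedup
    print(f"{distance_m:.0f} m one-way -> {subjective_delay_s:.1f} subjective seconds")
# -> about 1-2 subjective seconds, matching the stated two-second limit
```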

Alas those are pretty unreasonable assumptions. Ems don’t reproduce via sex, and would be selected for not devoting lots of energy to sex. Yes, sex is buried deep in us, so ems would still devote some energy to it. But not so much as to make sex the overwhelming factor that sets em speeds. Not given em econ competitive pressures and the huge selection factors possible. I’m sure it is sexy today to spend money like a billionaire, but most people don’t because they can’t afford to. Since running a million times faster should cost a million times more, ems might not be able to afford that either.

Also, the scale at which we can talk without delay has just not been that important historically in setting our city and culture scales. We had integrated cultures even when talking suffered weeks of delay, we now have many cultures even though we can all talk without much delay, and city scales have been set more by how far we can commute in an hour than by communication delays. So while ems might well have a unit of organization corresponding to their easy-talk scale, important interactions should also exist at larger scales.

Those promised quotes from Henson’s article:

Turbulence Contrarians

A few months ago I came across an intriguing contrarian theory:

Hydrogravitational-dynamics (HGD) cosmology … predicts … Earth-mass planets fragmented from plasma at 300 Kyr [after the big bang]. Stars promptly formed from mergers of these gas planets, and chemicals C, N, O, Fe etc. were created by the stars and their supernovae. Seeded gas planets reduced the oxides to hot water oceans [at 2 Myr], … [which] hosted the first organic chemistry and the first life, distributed to the 10^80 planets of the cosmological big bang by comets. … The dark matter of galaxies is mostly primordial planets in proto globular star cluster clumps, 30,000,000 planets per star (not 8!). (more)

Digging further, I found that these contrarians have related views on the puzzlingly high levels of mixing found in oceans, atmospheres, and stars. For example, some invoke fish swimming to explain otherwise puzzling high levels of ocean water mixing. These turbulence contrarians say that most theorists neglect an important long tail of rare bursts of intense turbulence, each followed by long-lasting “contrails.” These rare bursts not only mix oceans and atmospheres, they also supposedly create a more rapid clumping of matter in the early universe, leading to more and earlier nomad planets (not tied to stars), which could then lead to early life and its rapid spread.

I didn’t understand turbulence well enough to judge these theories, so I set it all aside. But over the last few months I’ve noticed many reports about puzzling numbers and locations of planets:

What has puzzled observers and theorists so far is the high proportion of planets — roughly one-third to one-half — that are bigger than Earth but smaller than Neptune. … Furthermore, most of them are in tight orbits around their host star, precisely where the modellers say they shouldn’t be. (more)

Last year, researchers detected about a dozen nomad planets, using a technique called gravitational microlensing, which looks for stars whose light is momentarily refocused by the gravity of passing planets. The research produced evidence that roughly two nomads exist for every typical, so-called main-sequence star in our galaxy. The new study estimates that nomads may be up to 50,000 times more common than that. (more)

This new study was theoretical. It fit a power law to the distribution of nomad planet microlensing observations to predict ~60 Pluto-sized or larger nomad planets per star. When projected down to the comet scale, this power law actually matches known bounds on comet density. The 95% c.l. upper bound for the power law parameter gives 100,000 such wandering Plutos or larger per star.
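
To illustrate the kind of extrapolation involved, here is a purely hypothetical sketch. It anchors on the quoted microlensing result of roughly two Jupiter-scale nomads per star, then extrapolates a cumulative power law down to Pluto’s mass for several made-up slopes; the study’s actual fitted parameters are not given above, so none of these slopes should be read as its estimate:

```python
# An illustrative power-law extrapolation of the kind described above. The
# anchor (about two nomad planets of roughly Jupiter mass or larger per star,
# from the quoted microlensing result) is taken from the post; the cumulative
# slopes below are hypothetical, chosen only to show how sensitive the
# Pluto-and-larger count is to the fitted power-law parameter.
import math

M_JUPITER = 1.9e27      # kg
M_PLUTO = 1.3e22        # kg
anchor_count = 2.0      # nomads per star above ~Jupiter mass (quoted above)

def count_above(mass_kg, slope):
    """Cumulative nomads per star above mass_kg: N(>m) = anchor * (m/M_Jupiter)^(-slope)."""
    return anchor_count * (mass_kg / M_JUPITER) ** (-slope)

for slope in (0.2, 0.3, 0.5, 0.9):
    print(f"slope {slope:.1f}: ~{count_above(M_PLUTO, slope):,.0f} Pluto-or-larger nomads per star")
# small changes in the fitted slope swing the extrapolated count by orders of
# magnitude, which is roughly why a best fit (~60 per star) and a 95% upper
# bound (~100,000 per star) can differ so much
```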

I take all this as weak support for something in the direction of these contrarian theories – there are more nomad planets than theorists expected, and some of that may come from neglect of early universe turbulence. But thirty million nomad Plutos per star still seems pretty damn unlikely.

FYI, here is part of an email I sent the authors in mid December, as yet unanswered:
