
Monster Pumps

Yesterday’s Science has a long paper on an exciting new scaling law. For a century we’ve known that larger organisms have lower per-mass metabolisms, and thus lower growth rates. Metabolism goes as size to the power of 3/4 over at least twenty orders of magnitude.


So our largest organisms have a per-mass metabolism one hundred thousand times lower than our smallest organisms.
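
To spell out that arithmetic: if total metabolism R goes as mass M to the 3/4 power, then per-mass metabolism goes as M to the −1/4 power, and twenty orders of magnitude in mass give a factor of one hundred thousand:

    % Per-mass metabolism from the 3/4 scaling law:
    %   R \propto M^{3/4}  =>  R/M \propto M^{-1/4}
    % Over twenty orders of magnitude in mass:
    \[
      \frac{R}{M} \propto M^{-1/4}, \qquad \left(10^{20}\right)^{1/4} = 10^{5}.
    \]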

The new finding is that local metabolism also goes as local biomass density to the power of roughly 3/4, over at least three orders of magnitude. This implies that life in dense areas like jungles is on average just slower and lazier than life in sparse areas like deserts. It also implies that the ratio of predator to prey biomass is smaller in jungles than in deserts.

When I researched how to cool large em cities I found that our best cooling techs scale quite nicely, and so very big cities need only pay a small premium for cooling compared to small cities. However, I’d been puzzled about why biological organisms seem to pay much higher premiums to be large. This new paper inspired me to dig into the issue.

What I found is that human engineers have figured out ways to scale large fluid distribution systems that biology has just never found. For example, the hearts that pump blood through animals are periodic pumps, and such pumps have the problem that the pulses they send through the bloodstream can reflect back from joints where blood vessels split into smaller vessels. There are ways to design joints to eliminate such reflections, but those solutions create a total volume of blood vessels that doesn’t scale well. Another problem is that blood vessels taking blood to and from the heart are often near enough to each other to leak heat, which can also create a bad scaling problem.
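
To make the reflection problem concrete, here is a minimal sketch, assuming lossless vessels with equal pulse wave speeds so that a vessel’s characteristic admittance is simply proportional to its cross-sectional area; the function and the numbers are mine, for illustration only:

    # Sketch: reflection of a pressure pulse where a parent vessel splits.
    # Assumes lossless vessels with equal pulse-wave speed, so characteristic
    # admittance Y is proportional to cross-sectional area. Illustrative only.
    def reflection_coefficient(parent_area, daughter_areas):
        """Fraction of an incident pulse reflected back at the junction.

        R = (Y0 - sum(Yi)) / (Y0 + sum(Yi)); R = 0 is a perfectly matched joint.
        """
        y0 = parent_area
        y_sum = sum(daughter_areas)
        return (y0 - y_sum) / (y0 + y_sum)

    # Matched joint: daughter areas sum to the parent area, so no reflection.
    print(reflection_coefficient(2.0, [1.0, 1.0]))  # 0.0

    # Unmatched joint: daughters too narrow, so a third of the pulse reflects.
    print(reflection_coefficient(2.0, [0.5, 0.5]))  # ~0.33

Note that the matched case preserves total cross-sectional area at every split, which hints at why reflection-free designs inflate the total volume of a branching vessel network.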

The net result is that big organisms on Earth are just noticeably sluggish compared to small ones. But big organisms don’t have to be sluggish; that is just an accident of the engineering failures of Earth biology. If there is a planet out there where biology has figured out how to efficiently scale its blood vessels, such as by using continuous pumps, the organisms on that planet will have fewer barriers to growing large and active. Efficiently designed large animals on Earth could easily have metabolisms thousands of times faster than those of existing animals. So, if you don’t already have enough reasons to be scared of alien monsters, consider that they might have far faster metabolisms, and also be very large.

This seems yet another reason to think that biology will soon be over. Human culture is inventing so many powerful advances that biology never found, innovations that are far easier to integrate into the human economy than into biological designs. Descendants that integrate well into the human economy will just outcompete biology.

I also spent a little time thinking about how one might explain this dependence of metabolism on biomass density. I found I could explain it by assuming that the more biomass there is in some area, the less energy each unit of biomass gets from the sun. Specifically, I assume that the energy collected from the sun by the biomass in some area has a power law dependence on the biomass in that area. If biomass were very efficiently arranged into thin solar collectors then that power would be one. But since we expect some biomass to block the view of other biomass, a problem that gets worse with more biomass, the power is plausibly less than one. Let’s call this power a, which relates biomass density B to energy collected per area E, as in E = cB^a.

There are two plausible scenarios for converting energy into new biomass. When the main resource needed to make new biomass via metabolism is just energy, used to create molecules that embody more energy in their arrangement, then M = cB^(a-1), where M is the rate of production of new biomass relative to old biomass; this is just the collected energy spread over the existing biomass. When new biomass doesn’t need much energy, but it does need thermodynamically reversible machinery to rearrange molecules, then M = cB^((a-1)/2). These two scenarios reproduce the observed 3/4 power scaling law when a = 3/4 and a = 1/2 respectively. When making new biomass requires both simple energy and reversible machinery, the required power a is somewhere between 1/2 and 3/4.
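
To check those exponents: the observed law makes per-biomass metabolism, and hence M, go as B to the −1/4 power. In the first scenario M is collected energy spread over biomass, M ∝ E/B; in the second, I assume reversible machinery dissipates less when run slower, so its speed goes as the square root of its energy budget:

    % Observed: metabolism per area ~ B^{3/4}, so per-biomass rate M ~ B^{-1/4}.
    % Scenario 1 (energy-limited):
    %   M \propto E/B = cB^{a-1}
    % Scenario 2 (reversible machinery; speed ~ sqrt of energy budget):
    %   M \propto cB^{(a-1)/2}
    \[
      a - 1 = -\tfrac{1}{4} \;\Rightarrow\; a = \tfrac{3}{4},
      \qquad
      \tfrac{a-1}{2} = -\tfrac{1}{4} \;\Rightarrow\; a = \tfrac{1}{2}.
    \]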

Added 14Sep: On reflection and further study, it seems that biologists just do not have a good theory for the observed 3/4 power. In addition, the power deviates substantially from 3/4 within smaller datasets.


Signal Mappers Decouple

Andrew Sullivan notes that Tim Lee argues that ems (whole brain emulations) just won’t work:

There’s no reason to think it will ever be possible to scan the human brain and create a functionally equivalent copy in software. Hanson … fails to grasp that the emulation of one computer by another is only possible because digital computers are the products of human designs, and are therefore inherently easier to emulate than natural systems. … Digital computers … were built by a human being based on a top-down specification that explicitly defines which details of their operation are important. The spec says exactly which aspects of the machine must be emulated and which aspects may be safely ignored. This matters because we don’t have anywhere close to enough hardware to model the physical characteristics of digital machines in detail. Rather, emulation involves re-implementing the mathematical model on which the original hardware was based. Because this model is mathematically precise, the original device can be perfectly replicated.

You can’t emulate a natural system because natural systems don’t have designers, and therefore weren’t built to conform to any particular mathematical model. … Creating a simulation of a natural system inherently means making judgment calls about which aspects of a physical system are the most important. And because there’s no underlying blueprint, these guesses are never perfect: it will always be necessary to leave out some details that affect the behavior of the overall system, which means that simulations are never more than approximately right. Weather simulations, for example, are never going to be able to predict precisely where each raindrop will fall; they only predict general large-scale trends, and only for a limited period of time. … We may have relatively good models for the operation of nerves, but these models are simplifications, and therefore they will differ in subtle ways from the operation of actual nerves. And these subtle micro-level inaccuracies will snowball into large-scale errors when we try to simulate an entire brain, in precisely the same way that small micro-level imperfections in weather models accumulate to make long-range forecasting inaccurate. … Each neuron is itself a complex biological system. I see no reason to think we’ll ever be able to reduce it to a mathematically tractable model. (more; Eli Dourado agrees; Alex Waller disagrees.)

Human brains were not designed by humans, but they were designed. Evolution has imposed huge selection pressures on brains over millions of years, to perform very particular functions. Yes, humans use more math than does natural selection to assist them. But we should expect brain emulation to be feasible because brains function to process signals, and the decoupling of signal dimensions from other system dimensions is central to achieving the function of a signal processor. The weather is not a designed signal processor, so it does not achieve such decoupling. Let me explain.

A signal processor is designed to maintain some intended relation between particular inputs and outputs. All known signal processors are physical systems with vastly more degrees of freedom than are contained in the relevant inputs they seek to receive, the outputs they seek to send, or the sorts of dependencies between inputs and outputs they seek to maintain. So in order to manage its intended input-output relation, a signal processor simply must be designed to minimize the coupling between its designed input, output, and internal channels, and all of its other “extra” physical degrees of freedom. Really, just ask most any signal-processing hardware engineer.

Now sometimes random inputs can be useful in certain signal processing strategies, and this can be implemented by coupling certain parts of the system to most any random degrees of freedom. So signal processors don’t always want to minimize extra couplings. But this is a rare exception to the general need to decouple.

The bottom line is that to emulate a biological signal processor, one need only identify its key internal signal dimensions and their internal mappings – how input signals are mapped to output signals for each part of the system. These key dimensions are typically a tiny fraction of its physical degrees of freedom. Reproducing such dimensions and mappings with sufficient accuracy will reproduce the function of the system.
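
As a toy illustration (hypothetical parameters, not a model of any real cell): suppose the “physical” system is a leaky integrator whose many extra degrees of freedom couple only weakly to its signal state. Then an emulation that tracks only the signal dimension reproduces its input-output mapping almost exactly:

    # Toy illustration: emulate a signal mapping, not the underlying physics.
    # The "cell" here is a leaky integrator; its physical embodiment has many
    # extra degrees of freedom (thermal noise, etc.) that barely couple to the
    # signal. Hypothetical example, not a model of any real neuron.
    import random

    def physical_cell(inputs, leak=0.9, noise=1e-3):
        """The 'real' system: signal state plus weakly-coupled extra physics."""
        state, outputs = 0.0, []
        for x in inputs:
            state = leak * state + x + noise * random.gauss(0, 1)  # extra DOF
            outputs.append(state)
        return outputs

    def emulated_cell(inputs, leak=0.9):
        """The emulation: reproduce only the signal dimension and its mapping."""
        state, outputs = 0.0, []
        for x in inputs:
            state = leak * state + x
            outputs.append(state)
        return outputs

    inputs = [1.0, 0.0, 0.5, 0.0, 0.0]
    real = physical_cell(inputs)
    emu = emulated_cell(inputs)
    print(max(abs(r - e) for r, e in zip(real, emu)))  # small: function preserved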

This is proven daily by the 200,000 people with artificial ears, and will be proven soon when artificial eyes are fielded. Artificial ears and eyes do not require a detailed weather-forecasting-like simulation of the vast complex physical systems that are our ears and eyes. Yes, such artificial organs do not exactly reproduce the input-output relations of their biological counterparts. I expect someone with one artificial ear and one real ear could tell the difference. But the reproduction is close enough to allow the artificial versions to perform most of the same practical functions.

We are confident that the number of relevant signal dimensions in a human brain is vastly smaller than its physical degrees of freedom. But we do not know just how many such dimensions there are. The more dimensions, the harder it will be to emulate them. But the fact that human brains continue to function with nearly the same effectiveness when they are whacked on the side of the head, or when flooded with various odd chemicals, shows they have been designed to decouple from most other physical brain dimensions.

The brain still functions reasonably well even when flooded with chemicals specifically designed to interfere with neurotransmitters, the key chemicals by which neurons send signals to each other! Yes, people on “drugs” don’t function exactly the same, but with moderate drug levels people can still perform most of the functions required for most jobs.

Remember, my main claim is that whole brain emulation will let machines substitute for humans through the vast majority of the world economy. The equivalent of human brains on mild drugs should be plenty sufficient for this purpose – we don’t need exact replicas.

Added 7p: Tim Lee responds:

Hanson seems to be making a different claim here than he made in his EconTalk interview. There his claim seemed to be that we didn’t need to understand how the brain works in any detail because we could simply scan a brain’s neurons and “port” them to a silicon substrate. Here, in contrast, he’s suggesting that we determine the brain’s “key internal signal dimensions and their internal mappings” and then build a digital system that replicates these higher-level functions. Which is to say we do need to understand how the brain works in some detail before we can duplicate it computationally. …

Biologists know a ton about proteins. … Yet despite all our knowledge, … general protein folding is believed to be computationally intractable. … My point is that even detailed micro-level knowledge of a system doesn’t necessarily give us the capacity to efficiently predict its macro-level behavior. … By the same token, even if we had a pristine brain scan and a detailed understanding of the micro-level properties of neurons, there’s no good reason to think that simulating the behavior of 100 billion neurons will ever be computationally tractable.

My claim is that, in order to create economically sufficient substitutes for human workers, we don’t need to understand how the brain works beyond having decent models of each cell type as a signal processor. Like the weather, protein folding is not designed to process signals and so does not have the decoupling feature I describe above. Brain cells are designed to process signals in the brain, and so should have a much simpler description in signal processing terms. We already have pretty good signal-processing models of some cell types; we just need to do the same for all the other cell types.
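
For concreteness, here is a minimal sketch of the kind of signal-processing cell model I have in mind, a textbook leaky integrate-and-fire neuron, with illustrative parameters rather than fits to any particular cell type:

    # Minimal leaky integrate-and-fire neuron: a standard signal-processing
    # model of a spiking cell. Parameters are illustrative stand-ins.
    def lif_spike_times(input_current, dt=1e-3, tau=0.02, v_rest=-0.065,
                        v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
        """Return spike times (s), given input currents (A), one per time step."""
        v, spikes = v_rest, []
        for step, i_in in enumerate(input_current):
            # Voltage leaks toward rest while integrating injected current.
            v += (dt / tau) * (v_rest - v + r_m * i_in)
            if v >= v_thresh:           # crossing threshold emits a spike...
                spikes.append(step * dt)
                v = v_reset             # ...and resets the membrane voltage
        return spikes

    # A constant 2 nA input drives a regular spike train over one second.
    print(lif_spike_times([2e-9] * 1000))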
