Tag Archives: Biology

Monster Pumps

Yesterday’s Science has a long paper on an exciting new scaling law. For a century we’ve known that larger organisms have lower metabolisms, and thus lower growth rates. Metabolism goes as size to the power of 3/4 over at least twenty orders of magnitude.

So our largest organisms have a per-mass metabolism one hundred thousand times lower than our smallest organisms.
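
Here is a quick check of that arithmetic (a sketch I am adding for illustration, not from the paper):

```python
# Kleiber scaling: total metabolism ~ mass**(3/4),
# so per-mass metabolism ~ mass**(-1/4).
mass_ratio = 10.0 ** 20              # smallest to largest organisms
metabolism_gap = mass_ratio ** (1 / 4)
print(f"{metabolism_gap:.0e}")       # 1e+05, a hundred-thousand-fold gap
```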

The new finding is that local metabolism also goes as local biomass density to the power of roughly 3/4, over at least three orders of magnitude. This implies that life in dense areas like jungles is on average just slower and lazier than life in sparse areas like deserts. It also implies that the ratio of predator to prey biomass is smaller in jungles than in deserts.

When I researched how to cool large em cities I found that our best cooling techs scale quite nicely, and so very big cities need only pay a small premium for cooling compared to small cities. However, I’d been puzzled about why biological organisms seem to pay much higher premiums to be large. This new paper inspired me to dig into the issue.

What I found is that human engineers have figured ways to scale large fluid distribution systems that biology has just never figured out. For example, the hearts that pump blood through animals are periodic pumps, and such pumps have the problem that the pulses they send through the blood stream can reflect back from joints where blood vessels split into smaller vessels. There are ways to design joints to eliminate this, but those solutions create a total volume of blood vessels that doesn’t scale well. Another problem is that blood vessels taking blood to and from the heart are often near enough to each other to leak heat, which can also create a bad scaling problem.
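
To make the reflection problem concrete, here is a toy sketch using the standard transmission-line analogy (my illustration, not anything from the paper): a pulse crosses a branch point without echo only when the parent vessel’s characteristic admittance equals the sum of its daughters’.

```python
# Pulse reflection at a vessel bifurcation, in the standard
# transmission-line approximation: the reflected fraction is zero
# only when the parent admittance Y0 matches the daughters' sum.
def reflection(Y0, Y1, Y2):
    return (Y0 - (Y1 + Y2)) / (Y0 + (Y1 + Y2))

print(reflection(1.0, 0.5, 0.5))  # matched branch: 0.0, no echo
print(reflection(1.0, 0.3, 0.3))  # undersized daughters: 0.25 echo
```

Matching admittances at every branch constrains vessel sizes all the way down the tree, which is the kind of constraint that can make total vessel volume scale badly.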

The net result is that big organisms on Earth are just noticeably sluggish compared to small ones. But big organisms don’t have to be sluggish; that is just an accident of the engineering failures of Earth biology. If there is a planet out there where biology has figured out how to efficiently scale its blood vessels, such as by using continuous pumps, the organisms on that planet will have fewer barriers to growing large and active. Efficiently designed large animals on Earth could easily have metabolisms thousands of times faster than those of existing animals. So, if you don’t already have enough reasons to be scared of alien monsters, consider that they might have far faster metabolisms, and also be very large.

This seems yet another reason to think that biology will soon be over. Human culture is inventing so many powerful advances that biology never found, innovations that are far easier to integrate into the human economy than into biological designs. Descendants that integrate well into the human economy will just outcompete biology.

I also spent a little time thinking about how one might explain the dependence of metabolism on biomass density. I found I could explain it by assuming that the more biomass there is in some area, the less energy each unit of biomass gets from the sun. Specifically, I assume that the energy collected from the sun by the biomass in some area has a power law dependence on the biomass in that area. If biomass were very efficiently arranged into thin solar collectors then that power would be one. But since we expect some biomass to block the view of other biomass, a problem that gets worse with more biomass, the power is plausibly less than one. Let’s call this power a; it relates biomass density B to energy collected per area E, as in E = cB^a.

There are two plausible scenarios for converting energy into new biomass. When the main resource needed to make new biomass via metabolism is just energy, to create molecules that embody more energy in their arrangement, then M = cB^(a−1), where M is the rate of production of new biomass relative to old biomass. When new biomass doesn’t need much energy, but does need thermodynamically reversible machinery to rearrange molecules, then M = cB^((a−1)/2). These two scenarios reproduce the observed 3/4 power scaling law when a = 3/4 and a = 1/2 respectively. When making new biomass requires both simple energy and reversible machinery, the required power a is somewhere between 1/2 and 3/4.
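
As a check on that algebra (my own sketch, using the symbols defined above): since metabolism per area goes as B^(3/4), the per-mass rate M goes as B^(−1/4), so set each scenario’s exponent equal to −1/4 and solve for a:

```python
from sympy import Rational, solve, symbols

a = symbols('a')
# Observed: metabolism per area ~ B**(3/4), so per-mass M ~ B**(-1/4).
target = Rational(-1, 4)
print(solve((a - 1) - target, a))      # energy-limited scenario: [3/4]
print(solve((a - 1) / 2 - target, a))  # machinery-limited scenario: [1/2]
```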

Added 14Sep: On reflection and further study, it seems that biologists just do not have a good theory for the observed 3/4 power. In addition, the power deviates substantially from 3/4 within smaller datasets.


More Whales Please

I was struck by this quote in the paper cited in my last post:

The biosphere considered as a whole has managed to expand the amount of solar energy captured for metabolism to around 5%, limited by the nonuniform presence of key nutrients across the Earth’s surface — primarily fresh water, phosphorus, and nitrogen. Life on Earth is not free-energy-limited because, up until recently, it has not had the intelligence and mega-engineering to distribute Earth’s resources to all of the places solar energy happens to fall, and so it is, in most places, nutrient-limited. (more)

That reminded me of reading earlier this year about how whale poop was once a great nutrient distributor:

A couple of centuries ago, the southern seas were packed with baleen whales. Blue whales, the biggest creatures on Earth, were a hundred times more plentiful than they are today. Biologists couldn’t understand how whales could feed themselves in such an iron-poor environment. And now we may have an answer: Whales are extraordinary recyclers. What whales consume (which is a lot), they give back. (more)

It seems we should save (and expand) the whales because of their huge positive externality on other sea life. If humans manage to increase the fraction of solar energy used by life on Earth, it will be primarily because of trade and transport. Transport gives us the ability to move lots of nutrients, and trade gives us the incentives to move them.


Irreducible Detail

Our best theories vary in generality. Some theories are very general, but most are more context specific. Putting all of our best theories together usually doesn’t let us make exact predictions on most variables of interest. We often express this fact formally in our models via “noise,” which represents other factors that we can’t yet predict.

For each of our theories there was a point in time when we didn’t have it yet. Thus we expect to continue to learn more theories, which will let us make more precise predictions. And so it might seem like we can’t constrain our eventual power of prediction; maybe we will have powerful enough theories to predict everything exactly.

But that doesn’t seem right either. Our best theories in many areas tell us about fundamental limits on our prediction abilities, and thus limits on how powerful future simple general theories could be. For example:

  • Thermodynamics – We can predict some gross features of future physical states, but the entropy of a system sets a very high (negentropy) cost to learn precise info about the state of that system. If thermodynamics is right, there will never be a general theory to let one predict future states more cheaply than this.
  • Finance – Finance theory has identified many relevant parameters for predicting the overall distribution of future asset returns. However, finance theory strongly suggests that it is usually very hard to predict the details of the specific future returns of specific assets. The ability to do so would be worth such a huge amount that there just can’t be many who often have such an ability. The cost to gain such an ability must usually be more than the gains from trading on it.
  • Cryptography – A well-devised code looks random to an untrained eye. As there are a great many possible codes, and a great many ways to find weaknesses in them, it doesn’t seem like there could be any general way to break all codes. Instead code breaking is a matter of knowing lots of specific things about codes and the ways they might be broken. People use codes when they expect the cost of breaking them to be prohibitive, and such expectations are usually right.
  • Innovation – Economic theory can predict many features of economies, and of how economies change and grow. And innovation contributes greatly to growth. But economists also strongly expect that the details of particular future innovations cannot be predicted except at a prohibitive cost. Since knowing of innovations ahead of time can often be used for great private profit, and would speed up the introduction of those innovations, it seems that no cheap-to-apply simple general theories can exist which predict the details of most innovations well ahead of time.
  • Ecosystems – We understand some ways in which parameters of ecosystems correlate with their environments. Most of these make sense in terms of general theories of natural selection and genetics. However, most ecologists strongly suspect that the vast majority of the details of particular ecosystems and the species that inhabit them are not easily predictable by simple general theories. Evolution says that many details will be well matched to other details, but to predict them you must know much about the other details to which they match.

In thermodynamics, finance, cryptography, innovation, and ecosystems, we have learned that while there are many useful generalities, the universe is also chock full of important irreducible, incompressible detail. As this is true at many levels of abstraction, I would add this entry to the above list:

  • Intelligence – General theories tell us what intelligence means, and how it can generalize across tasks and contexts. But most everything we’ve learned about intelligence suggests that the key to smarts is having many not-fully-general tools. Human brains are smart mainly by containing many powerful not-fully-general modules, and using many such modules on each task. These modules would not work well in all possible universes, but they often do in ours. Ordinary software also gets smart by containing many powerful modules. While the architecture that organizes those modules can make some difference, that difference is mostly small compared to having more and better modules. In a world of competing software firms, most ways to improve modules or find new ones cost more than the profits they’d induce.

If most value in intelligence comes from the accumulation of many expensive parts, there may well be no powerful general theories to be discovered to revolutionize future AI, and give an overwhelming advantage to the first project to discover them. Which is the main reason that I’m skeptical about AI foom, the scenario where an initially small project quickly grows to take over the world.

Added 7p: Peter McCluskey has thoughtful commentary here.


Does complexity bias biotechnology towards doing damage?

A few months ago I attended the Singularity Summit in Australia. One of the presenters was Randal Koene (videos here), who spoke about technological progress towards whole brain emulation, and some of the impacts this advance would have.

Many enthusiasts – including Robin Hanson on this blog – hope to use mind uploading to extend their own lives. Mind uploading is an alternative to more standard ‘biological’ methods for preventing ageing proposed by others such as Aubrey de Grey of the Methuselah Foundation. Randal believes that proponents of using medicine to extend lives underestimate the difficulty of what they are attempting. The reason is that evolution has led to a large number of complex and interconnected molecular pathways which cause our bodies to age and decay. Stopping one pathway won’t extend your life by much, because another will simply cause your death soon after. Controlling contagious diseases extended our lives, but not by very long, because we ran up against cancer and heart disease. Unless some ‘master ageing switch’ turns up, suspending ageing will require discovering, unpacking and intervening in dozens of things that the body does. Throwing out the body, and taking the brain onto a computer, though extremely difficult, might still be the easier option.

This got me thinking about whether biotechnology can be expected to help or hurt us overall. My impression is that the practical impact of biotechnology on our lives has been much less than most enthusiasts expected. I was drawn into a genetics major at university out of enthusiasm for ideas like ‘golden rice’ and ‘designer babies’, but progress towards actually implementing these technologies is remarkably slow. Pulling apart the many kludges evolution has thrown into existing organisms is difficult. Manipulating them to reliably get the change you want, without screwing up something else you need, even more so.

Unfortunately, while making organisms work better is enormously challenging, damaging them is pretty easy. For a human to work, a lot needs to go right. For a human to fail, not much needs to go wrong. As a rule, fiddling with a complex system is a lot more likely to ruin it than improve it. As a result, a simple organism like the influenza virus can totally screw us up, even though killing its host offers it no particular evolutionary advantage:

Few pathogens known to man are as dangerous as the H5N1 avian influenza virus. Of the 600 reported cases of people infected, almost 60 per cent have died. The virus is considered so dangerous in the UK and Canada that research can only be performed in the highest biosafety level laboratory, a so-called BSL-4 lab. If the virus were to become readily transmissible from one person to another (it is readily transmissible between birds but not humans) it could cause a catastrophic global pandemic that would substantially reduce the world’s population.

The 1918 Spanish flu pandemic was caused by a virus that killed less than 2 per cent of its victims, yet went on to kill 50m worldwide. A highly pathogenic H5N1 virus that was as easily transmitted between humans could kill hundreds of millions more.


Sleep Is To Save Energy

Short sleepers, about 1% to 3% of the population, function well on less than 6 hours of sleep without being tired during the day. They tend to be unusually energetic and outgoing. (more)

What fundamental cost do short sleepers pay for their extra wakeful hours? A recent Science article collects an impressive range of evidence to support the theory that the main function of sleep is just to save energy – sleeping brains use a lot less energy, and wakeful human brains use as much as 25% of body energy. People vary in how much sleep they are programmed to need, and if this theory is correct the main risk short sleepers face is that they’ll more easily starve to death in very lean times.

Of course once we were programmed to regularly sleep to save energy, no doubt other biological and mental processes adapted to take small advantages from this arrangement. And once those adaptations are in place, it might become expensive for a body to violate their expectations. One person might need a lot of sleep because their body expects a lot of sleep, but a body that isn’t programmed to expect as much sleep needn’t pay much of a cost for sleeping less, aside from the higher energy cost of running an energy-expensive brain for more hours.

This has dramatic implications for the em future I’ve been exploring. Ems could be selected from among the 1-3% of humans who need less sleep, and we needn’t expect to pay any systematic cost for this in other parameters, other than due to there being only a finite number of humans to pick from. We might even find the global brain parameters that bodies now use to tell brains when they need sleep, and change their settings to turn ems of humans who need a lot of sleep into ems who need a lot less sleep. Average em sleep hours might then plausibly become six hours a night or less.



Are Firms Like Trees?

Trees are spectacularly successful, and have been for millions of years. They now cover ~30% of Earth’s land. So trees should be pretty well designed to do what they do. Yet the basic design of trees seems odd in many ways. Might this tell us something interesting about design?

A tree’s basic design problem is how to cheaply hold leaves as high as possible to see the sun, and not be blocked by other trees’ leaves. This leaf support system must be robust to the buffeting of winds and animals. Materials should resist being frozen, burned, and eaten by animals and disease. Oh, and the whole thing must keep functioning as it grows from a tiny seed.

Here are three odd features of tree design:

  1. Irregular-Shaped – Humans often design structures to lift large surface areas up high, and even to have them face the sun. But human designs are usually far more regular than trees. Our buildings and solar cell arrays tend to be regular, and usually rectangular. Trees, in contrast, are higgledy-piggledy. The regularity of most animal bodies shows that trees could have been regular, with each part in its intended place. Why aren’t tree bodies regular?
  2. Self-Blocking – Human-designed solar cells, and sets of windows that serve a similar function, manage to avoid overly blocking each other. Cell/window elements tend to be arranged along a common surface. In trees, in contrast, leaves often block each other from the sun. Yet animal design again shows that evolution could have put leaves along a regular surface – consider the design of skin or wings. Why aren’t tree leaves on a common surface?
  3. Single-Support – Human structures for lifting things high usually have at least three points of support on the ground. (As do most land animals.) This helps them deal with random weight imbalances and sideways forces like winds. Yet each tree usually only connects to the ground via a single trunk. It didn’t have to be this way. Some fig trees set down more roots when older branches sag down to the ground. And just as people trying to stand on a shifting platform might hold each other’s hands for balance, trees could be designed to have some branches interlock with branches from neighboring trees for support. Why is tree support singular?

Now it is noteworthy that large cities also tend to have weaker forms of these features. Cities are less regular than buildings, buildings often block sunlight to neighboring buildings, and while each building has at least three supports, neighboring buildings rarely attach to each other for balance. What distinguishes cities and trees from buildings?

One key difference is that buildings are made all at once on land that is calm and clear, while cities and trees grow slowly in a changing environment, competing with others for resources. Since most small trees never live to be big trees, their choices must focus on current survival and local growth. A tree opportunistically adds its growth in whatever direction seems most open to sun at the moment, with less of a long term growth plan. Since this local growth ends up committing the future shape of the tree, local opportunism tends toward an irregular structure.

I’m less clear on explanations for self-blocking and single-support. Sending branches sideways to create new supports might seem to distract from rising higher, but if multiple supports allow a higher peak it isn’t clear why this isn’t worth waiting for. Neighboring tree connections might try to grab more support than they offer, or pull one down when they die. But it isn’t clear why tree connections couldn’t be weak and breakable to deal with such issues, or why trees couldn’t connect preferentially with kin.

Firms also must grow from small seeds, and most small firms never make it to be big firms. Perhaps an analogy with trees could help us understand why successful firms seem irregular and varied in structure, why they often work at cross-purposes internally, and why merging them with weakly related firms is usually a bad idea.


The History of Inequality

I recently posted on how cities and firms are distributed as a Zipf power law, with a power of one, where above some threshold each size scale holds roughly the same number of people, up to the size at which the world holds less than one such unit. Turns out, this also holds for nations:

[Figure: log nation size versus log rank]

The threshold below which there are few nations is roughly three million people. For towns/cities this threshold scale is about three thousand, and for firms it is about three. How were such things distributed in the past?
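
To see concretely what a tail power of one means, here is a small simulation of mine (the sample size and seed are arbitrary): draw sizes from a power-one tail above the three million threshold, and each factor-of-two size range holds roughly the same total population.

```python
import numpy as np

rng = np.random.default_rng(0)
threshold = 3e6                                  # nation-size threshold
sizes = threshold / rng.uniform(size=1_000_000)  # tail power of one

# Total population falling in each factor-of-two ("octave") size bin:
edges = threshold * 2.0 ** np.arange(9)
for lo, hi in zip(edges[:-1], edges[1:]):
    total = sizes[(sizes >= lo) & (sizes < hi)].sum()
    print(f"{lo:9.2e} to {hi:9.2e}: {total:.2e}")
# Each octave holds about the same total, as a power of one implies.
```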

I recall that the US today produces few new towns, though centuries ago they formed often. So the threshold scale for towns has risen, probably due to minimum scales needed for efficient town services like electricity, sewers, etc. I’m also pretty sure that early in the farming era lots of folks lived in nations of a million or less. So the threshold scale for nations has also risen.

Before the industrial revolution, there were very few firms of any substantial scale. So during the farming era firms existed, but could not have been distributed by Zipf’s law; if firms then had a power law distribution, it must have had a much steeper power.

If we look all the way back to the forager era, then cities and nations could also not plausibly have had a Zipf distribution — there just were none of any substantial scale. So surely their size distribution also fell off faster than Zipf, as individual income does today.

Looking further back, at biology, the number of individuals per species is distributed nearly log-normally. But the number of species per genus, and the number of individuals with a given family name or ancestor, have long been distributed via a steeper tail, with number falling as nearly the square of size.
This lower inequality comes because fluctuations in the size of genera and family names are mainly due to uncorrelated fluctuations of their members, rather than to correlated shocks that help or hurt an entire firm, city, or nation together. While this distribution holds less inequality in the short run, still over very long runs it accumulates into vast inequality. For example, most species today descend from a tiny fraction of the species alive hundreds of millions of years ago.

Putting this all together, the number of species per genus and of individuals per family has long declined with size as a tail power of two. After the farming revolution, cities and nations could have correlated internal successes and larger feasible sizes, giving a thicker tail of big items. In the industry era, firms could also get very large. Today, nations, cities, and firms are all distributed with a tail power of one, above threshold scales of roughly three million, three thousand, and three, thresholds that have been rising with time.

My next post will discuss what these historical trends suggest about the future.


Trillions At War

The most breathtaking example of colony allegiance in the ant world is that of the Linepithema humile ant. Though native to Argentina, it has spread to many other parts of the world by hitching rides in human cargo. In California the biggest of these “supercolonies” ranges from San Francisco to the Mexican border and may contain a trillion individuals, united throughout by the same “national” identity. Each month millions of Argentine ants die along battlefronts that extend for miles around San Diego, where clashes occur with three other colonies in wars that may have been going on since the species arrived in the state a century ago. The Lanchester square law [of combat] applies with a vengeance in these battles. Cheap, tiny and constantly being replaced by an inexhaustible supply of reinforcements as they fall, Argentine workers reach densities of a few million in the average suburban yard. By vastly outnumbering whatever native species they encounter, the supercolonies control absolute territories, killing every competitor they contact. (more)

Shades of our future, as someday we will hopefully have quadrillions of descendants, and alas they will likely sometimes go to war.
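
The Lanchester square law mentioned in that quote says that with aimed fire each side’s losses go as the number of enemy shooters, so fighting strength grows as the square of numbers. Here is a minimal sketch of mine, with arbitrary starting numbers and equal per-fighter effectiveness:

```python
# Lanchester square law: dA/dt = -b*B, dB/dt = -a*A, so the quantity
# a*A**2 - b*B**2 is conserved and numbers count quadratically.
a, b = 1.0, 1.0          # per-fighter effectiveness (assumed equal)
A, B = 2000.0, 1000.0    # side A starts with twice the numbers
dt = 1e-3                # small Euler step
while A > 0 and B > 0:
    A, B = A - b * B * dt, B - a * A * dt
print(f"survivors: A = {A:.0f}, B = {max(B, 0):.0f}")
# Square-law prediction: A ends near sqrt(2000**2 - 1000**2) ~ 1732.
```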


Adapt Or Start Over?

Sean Carroll has doubts on nanotech:

Living organisms … can, in a wide variety of circumstances, repair themselves. … Which brings up something that has always worried me about nanotechnology … tiny machines that have been heroically constructed … just seem so darn fragile. … surely one has to worry about the little buggers breaking down. … So what you really want is microscopic machinery that is robust enough to repair itself. Fortunately, this problem has already been solved at least once: it’s called “life.” … This is why my utterly underinformed opinion is that the biggest advances will come not from nanotechnology, but from synthetic biology. (more)

There are four ways to deal with system damage: 1) reliability, 2) redundancy, 3) repair, and 4) replacement. Some designs are less prone to damage; with redundant parts all must fail for a system to fail; sometimes damage can be undone; and the faster a system is replaced the less robust it needs to be. Both artificial and natural systems use all four approaches. Artificial systems often have especially reliable parts, and so rely less on repair. And since they can coordinate better with outside systems, when they do repair they rely more on outside assistance – they have less need for self-repair. So I don’t see artificial systems as failing especially at self-repair.
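
The redundancy point is simple arithmetic, illustrated in this toy sketch of mine assuming independent part failures:

```python
# With n redundant parts, each failing independently with probability p,
# the whole system fails only when all of them do: P(fail) = p**n.
p = 0.01
for n in (1, 2, 3):
    print(f"n = {n}: failure probability {p**n:.0e}")
```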

Nevertheless, Carroll’s basic concern has merit. It can be hard for new approaches to compete with complex tightly integrated approaches that have been adapted over a long time. We humans have succeeded in displacing natural systems with artificial systems in many situations, but in other cases we do better to inherit and adapt natural systems than to try to redesign from scratch. For example, if you hear a song you like, it usually makes more sense to just copy it, and perhaps adapt it to your preferred instruments or style, than to design a whole new song like it.  I’ve argued that we are not up to the task of designing cities from scratch, and that the first human-level artificial intelligences will use better parts but mostly copy structure from biological brains.

So what determines when we can successfully redesign from scratch, and when we are better off copying and adapting existing systems? Redesign makes more sense when we have access to far better parts, and when system designs are relatively simple, so that system architecture matters more, especially if we can design better architectures. In contrast, it makes more sense to inherit and adapt existing systems when a few key architectural choices matter less, compared to system “content” (i.e., all the rest). As with songs, cities, and minds. I don’t have a strong opinion about which case applies best for nanotech.


Hail Temple, Buck

Two recent movies, Temple Grandin and Buck, depict the most inspirational real heroes I can recall. Temple Grandin and Buck Brannaman both pioneered ways to improve animal lives, by getting deep enough in animal heads to see how to avoid terrorizing them. Temple deals with cattle, Buck with horses. Terrorizing animals less also helps humans who deal with them.

Some lessons:

1) Neither is a purist. Both accept that animals often suffer, and are slaves of humans. Both work within the current system to make animals lives better, even if the result falls short of their ideals. Compromising with bad is often essential to doing good.

2) Though both are similarly insightful, Grandin has had a far bigger impact, as her innovations are embodied in physical capital, e.g., the layout of large plants, chosen by large firms. She has revolutionized an industry. In contrast, Brannaman’s innovations are embodied in human capital chosen by small organizations. While Brannaman is personally impressive, it is far from clear how much people like him have really changed common practice. Capital intensity does indeed promote innovation.

3) Many doubt that we should feel bad about animal suffering, because they doubt animal minds react like human minds to force, pain, etc. The impressive abilities of Grandin and Brannaman to predict animal behavior by imagining themselves in animal situations supports their claim that cattle and horse fear and suffering is recognizably similar to human fear and suffering. I tentatively accept that such animals are afraid and suffer in similar ways to humans, with similar types of emotions and feelings, even if they cannot think or talk as abstractly about their suffering.

4) The fact that animals are slaves does not imply that animal lives have no value, or that nothing can affect that value. Slavery need not be worse than death, and usually isn’t. A future where the vast majority of our descendants are slaves could still be a glorious future, even if not as glorious as a future where they are not slaves.
