Tag Archives: Biology

More Whales Please

I was struck by this quote in the paper cited in my last post:

The biosphere considered as a whole has managed to expand the amount of solar energy captured for metabolism to around 5%, limited by the nonuniform presence of key nutrients across the Earth’s surface — primarily fresh water, phosphorus, and nitrogen. Life on Earth is not free-energy-limited because, up until recently, it has not had the intelligence and mega-engineering to distribute Earth’s resources to all of the places solar energy happens to fall, and so it is, in most places, nutrient-limited. (more)

That reminded me of reading earlier this year about how whale poop was once a great nutrient distributor:

A couple of centuries ago, the southern seas were packed with baleen whales. Blue whales, the biggest creatures on Earth, were a hundred times more plentiful than they are today. Biologists couldn’t understand how whales could feed themselves in such an iron-poor environment. And now we may have an answer: Whales are extraordinary recyclers. What whales consume (which is a lot), they give back. (more)

It seems we should save (and expand) the whales because of their huge positive externality on fish and other ocean life. If humans manage to increase the fraction of solar energy used by life on Earth, it will be primarily because of trade and transport. Transport gives us the ability to move lots of nutrients, and trade gives us the incentives to move them.

Irreducible Detail

Our best theories vary in generality. Some theories are very general, but most are more context specific. Putting all of our best theories together usually doesn’t let us make exact predictions on most variables of interest. We often express this fact formally in our models via “noise,” which represents other factors that we can’t yet predict.

For each of our theories there was a point in time when we didn’t have it yet. Thus we expect to continue to learn more theories, which will let us make more precise predictions. And so it might seem like we can’t constrain our eventual power of prediction; maybe we will have powerful enough theories to predict everything exactly.

But that doesn’t seem right either. Our best theories in many areas tell us about fundamental limits on our prediction abilities, and thus limits on how powerful future simple general theories could be. For example:

  • Thermodynamics – We can predict some gross features of future physical states, but the entropy of a system sets a very high (negentropy) cost to learn precise info about the state of that system (see the note after this list). If thermodynamics is right, there will never be a general theory to let one predict future states more cheaply than this.
  • Finance – Finance theory has identified many relevant parameters for predicting the overall distribution of future asset returns. However, finance theory strongly suggests that it is usually very hard to predict the details of the specific future returns of specific assets. The ability to do so would be worth such a huge amount that there just can’t be many who often have such an ability. The cost to gain such an ability must usually be more than the gains from trading on it.
  • Cryptography – A well-devised code looks random to an untrained eye. As there are a great many possible codes, and a great many ways to find weaknesses in them, it doesn’t seem like there could be any general way to break all codes. Instead, code breaking is a matter of knowing lots of specific things about codes and the ways they might be broken. People use codes when they expect the cost of breaking them to be prohibitive, and such expectations are usually right.
  • Innovation – Economic theory can predict many features of economies, and of how economies change and grow. And innovation contributes greatly to growth. But economists also strongly expect that the details of particular future innovations cannot be predicted except at a prohibitive cost. Since knowing of innovations ahead of time can often be used for great private profit, and would speed up the introduction of those innovations, it seems that no cheap-to-apply simple general theories can exist which predict the details of most innovations well ahead of time.
  • Ecosystems – We understand some ways in which parameters of ecosystems correlate with their environments. Most of these make sense in terms of general theories of natural selection and genetics. However, most ecologists strongly suspect that the vast majority of the details of particular ecosystems and the species that inhabit them are not easily predictable by simple general theories. Evolution says that many details will be well matched to other details, but to predict them you must know much about the other details to which they match.
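
One standard way to make the thermodynamics item’s “very high (negentropy) cost” precise (a sketch of the usual physics argument, not necessarily the exact formalization the author had in mind) is Landauer’s principle: reliably acquiring or erasing information dissipates free energy in proportion to the number of bits involved.

```latex
% Sketch of the usual bound (Landauer's principle), not the author's own derivation.
% Acquiring or erasing one bit of information at temperature T costs at least
E_{\min} \;\ge\; k_B T \ln 2 \quad \text{per bit},
% so pinning down one microstate out of W equally likely possibilities costs at least
E_{\min}(W) \;\ge\; k_B T \ln W \;=\; T\,\Delta S ,
% which for macroscopic systems (W astronomically large) is the "very high" cost above.
```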

In thermodynamics, finance, cryptography, innovations, and ecosystems, we have learned that while there are many useful generalities, the universe is also chock full of important irreducible incompressible detail. As this is true at many levels of abstraction, I would add this entry to the above list:

  • Intelligence – General theories tell us what intelligence means, and how it can generalize across tasks and contexts. But most everything we’ve learned about intelligence suggests that the key to smarts is having many not-fully-general tools. Human brains are smart mainly by containing many powerful not-fully-general modules, and using many modules to do each task. These modules would not work well in all possible universes, but they often do in ours. Ordinary software also gets smart by containing many powerful modules. While the architecture that organizes those modules can make some difference, that difference is mostly small compared to having more and better modules. In a world of competing software firms, most ways to improve modules or find new ones cost more than the profits they’d induce.

If most value in intelligence comes from the accumulation of many expensive parts, there may well be no powerful general theories to be discovered to revolutionize future AI, and give an overwhelming advantage to the first project to discover them. Which is the main reason that I’m skeptical about AI foom, the scenario where an initially small project quickly grows to take over the world.

Added 7p: Peter McCluskey has thoughtful commentary here.

Does complexity bias biotechnology towards doing damage?

A few months ago I attended the Singularity Summit in Australia. One of the presenters was Randal Koene (videos here), who spoke about technological progress towards whole brain emulation, and some of the impacts this advance would have.

Many enthusiasts – including Robin Hanson on this blog – hope to use mind uploading to extend their own lives. Mind uploading is an alternative to more standard ‘biological’ methods for preventing ageing proposed by others such as Aubrey de Grey of the Methuselah Foundation. Randal believes that proponents of using medicine to extend lives underestimate the difficulty of what they are attempting to do. The reason is that evolution has led to a large number of complex and interconnected molecular pathways which cause our bodies to age and decay. Stopping one pathway won’t extend your life by much, because another will simply cause your death soon after. Controlling contagious diseases extended our lives, but not for very long, because we ran up against cancer and heart disease. Unless some ‘master ageing switch’ turns up, suspending ageing will require discovering, unpacking and intervening in dozens of things that the body does. Throwing out the body, and moving the brain onto a computer, though extremely difficult, might still be the easier option.

This got me thinking about whether biotechnology can be expected to help or hurt us overall. My impression is that the practical impact of biotechnology on our lives has been much less than most enthusiasts expected. I was drawn into a genetics major at university out of enthusiasm for ideas like ‘golden rice’ and ‘designer babies’, but progress towards actually implementing these technologies is remarkably slow. Pulling apart the many kludges evolution has thrown into existing organisms is difficult. Manipulating them to reliably get the change you want, without screwing up something else you need, even more so.

Unfortunately, while making organisms work better is enormously challenging, damaging them is pretty easy. For a human to work, a lot needs to go right. For a human to fail, not much needs to go wrong. As a rule, fiddling with a complex system is a lot more likely to ruin it than improve it. As a result, a simple organism like the influenza virus can totally screw us up, even though killing its host offers it no particular evolutionary advantage:

Few pathogens known to man are as dangerous as the H5N1 avian influenza virus. Of the 600 reported cases of people infected, almost 60 per cent have died. The virus is considered so dangerous in the UK and Canada that research can only be performed in the highest biosafety level laboratory, a so-called BSL-4 lab. If the virus were to become readily transmissible from one person to another (it is readily transmissible between birds but not humans) it could cause a catastrophic global pandemic that would substantially reduce the world’s population.

The 1918 Spanish flu pandemic was caused by a virus that killed less than 2 per cent of its victims, yet went on to kill 50m worldwide. A highly pathogenic H5N1 virus that was as easily transmitted between humans could kill hundreds of millions more.

Sleep Is To Save Energy

Short sleepers, about 1% to 3% of the population, function well on less than 6 hours of sleep without being tired during the day. They tend to be unusually energetic and outgoing. (more)

What fundamental cost do short sleepers pay for their extra wakeful hours? A recent Science article collects an impressive range of evidence (quoted below) to support the theory that the main function of sleep is just to save energy – sleeping brains use a lot less energy, and wakeful human brains use as much as 25% of body energy. People vary in how much sleep they are programmed to need, and if this theory is correct the main risk short sleepers face is that they’ll more easily starve to death in very lean times.

Of course once we were programmed to regularly sleep to save energy, no doubt other biological and mental processes were adapted to take some small advantages from this arrangement. And once those adaptations are in place, it might become expensive for a body to violate those expectations. One person might need a lot of sleep because their body expects a lot of sleep, while another body that isn’t programmed to expect as much sleep needn’t pay much of a cost for sleeping less, aside from the higher energy cost of running an energy-expensive brain for more hours.

This has dramatic implications for the em future I’ve been exploring. Ems could be selected from among the 1-3% of humans who need less sleep, and we needn’t expect to pay any systematic cost for this in other parameters, other than due to there being only a finite number of humans to pick from. We might even find the global brain parameters that bodies now use to tell brains when they need sleep, and change their settings to turn ems of humans who need a lot of sleep into ems who need a lot less sleep. Average em sleep hours might then plausibly become six hours a night or less.

Those promised quotes: see the full post, "Sleep Is To Save Energy."

Are Firms Like Trees?

Trees are spectacularly successful, and have been for millions of years. They now cover ~30% of Earth’s land. So trees should be pretty well designed to do what they do. Yet the basic design of trees seems odd in many ways. Might this tell us something interesting about design?

A tree’s basic design problem is how to cheaply hold leaves as high as possible to see the sun, and not be blocked by other trees’ leaves. This leaf support system must be robust to the buffeting of winds and animals. Materials should resist being frozen, burned, and eaten by animals and disease. Oh, and the whole thing must keep functioning as it grows from a tiny seed.

Here are three odd features of tree design:

  1. Irregular-Shaped – Humans often design structures to lift large surface areas up high, and even to have them face the sun. But human designs are usually far more regular than trees. Our buildings and solar cell arrays tend to be regular, and usually rectangular. Trees, in contrast, are higgledy-piggledy. The regularity of most animal bodies shows that trees could have been regular, with each part in its intended place. Why aren’t tree bodies regular?
  2. Self-Blocking – Human-designed solar cells, and sets of windows that serve a similar function, manage to avoid overly blocking each other. Cell/window elements tend to be arranged along a common surface. In trees, in contrast, leaves often block each other from the sun. Yet animal design again shows that evolution could have put leaves along a regular surface – consider the design of skin or wings. Why aren’t tree leaves on a common surface?
  3. Single-Support – Human structures for lifting things high usually have at least three points of support on the ground. (As do most land animals.) This helps them deal with random weight imbalances and sideways forces like winds. Yet each tree usually only connects to the ground via a single trunk. It didn’t have to be this way. Some fig trees set down more roots when older branches sag down to the ground. And just as people trying to stand on a shifting platform might hold each other’s hands for balance, trees could be designed to have some branches interlock with branches from neighboring trees for support. Why is tree support singular?

Now it is noteworthy that large cities also tend to have weaker forms of these features. Cities are less regular than buildings, buildings often block sunlight to neighboring buildings, and while each building has at least three supports, neighboring buildings rarely attach to each other for balance. What distinguishes cities and trees from buildings?

One key difference is that buildings are made all at once on land that is calm and clear, while cities and trees grow slowly in a changing environment, competing for resources as they go. Since most small trees never live to be big trees, their choices must focus on current survival and local growth. A tree opportunistically adds its growth in whatever direction seems most open to sun at the moment, with less of a long term growth plan. Since this local growth ends up committing the tree to its future shape, local opportunism tends toward an irregular structure.

I’m less clear on explanations for self-blocking and single-support. Sending branches sideways to create new supports might seem to distract from rising higher, but if multiple supports allow a higher peak it isn’t clear why this isn’t worth waiting for. Neighboring tree connections might try to grab more support than they offer, or pull one down when they die. But it isn’t clear why tree connections couldn’t be weak and breakable to deal with such issues, or why trees couldn’t connect preferentially with kin.

Firms also must grow from small seeds, and most small firms never make it to be big firms. Perhaps an analogy with trees could help us understand why successful firms seem irregular and varied in structure, why they often work at cross-purposes internally, and why merging them with weakly related firms is usually a bad idea.

The History of Inequality

I recently posted on how cities and firms are distributed according to a Zipf power law, with a power of one, where above some threshold each scale holds roughly the same number of people, up until sizes so large that the world holds less than one such item. Turns out, this also holds for nations:

[Figure: log nation size vs. log rank]

The threshold below which there are few nations is roughly three million people. For towns/cities this threshold scale is about three thousand, and for firms it is about three. What were such things distributed like in the past?
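
To see concretely what a Zipf power of one implies, here is a small Python sketch (the top nation size and the threshold are illustrative round numbers, not figures from the post): under a pure rank-size rule where the r-th largest nation holds top_size / r people, every factor-of-ten size band above the threshold holds roughly the same total population.

```python
# Illustrative sketch (top size and threshold are assumed round numbers,
# not figures from the post): under a pure Zipf rank-size rule with power
# one -- the r-th largest nation holds top_size / r people -- every
# factor-of-ten size band above the threshold holds roughly the same
# total population.
import numpy as np

top_size = 1.4e9      # assumed population of the largest nation
threshold = 3e6       # assumed threshold below which few nations exist

ranks = np.arange(1, int(top_size / threshold) + 1)
sizes = top_size / ranks          # Zipf with power one

for k in range(3):                # bands 3e6-3e7, 3e7-3e8, 3e8-3e9
    lo, hi = threshold * 10**k, threshold * 10**(k + 1)
    band = sizes[(sizes >= lo) & (sizes < hi)]
    print(f"{lo:14,.0f} - {hi:14,.0f}: {len(band):4d} nations, "
          f"total {band.sum():,.0f} people")
# Each band's total comes out near top_size * ln(10), i.e. roughly the
# same number of people at every scale.
```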

I recall that the US today produces few new towns, though centuries ago they formed often. So the threshold scale for towns has risen, probably due to minimum scales needed for efficient town services like electricity, sewers, etc. I’m also pretty sure that early in the farming era lots of folks lived in nations of a million or less. So the threshold scale for nations has also risen.

Before the industrial revolution, there were very few firms of any substantial scale. So during the farming era firms existed, but they could not have been distributed by Zipf’s law. If firms had a power law distribution back then, it must have had a much steeper power.

If we look all the way back to the forager era, then cities and nations could also not plausibly have had a Zipf distribution — there just were none of any substantial scale. So surely their size distribution also fell off faster than Zipf, as individual income does today.

Looking further back, at biology, the number of individuals per species is distributed nearly log-normally. The number of species per genus, and the number of individuals with a given family name or ancestor, have long been distributed with a steeper tail, with number falling as nearly the square of size.

This lower inequality comes because fluctuations in the size of genera and family names are mainly due to uncorrelated fluctuations of their members, rather than to correlated shocks that help or hurt an entire firm, city, or nation together. While this distribution holds less inequality in the short run, still over very long runs it accumulates into vast inequality. For example, most species today descend from a tiny fraction of the species alive hundreds of millions of years ago.
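
A toy simulation can illustrate why correlated shocks generate so much more inequality than uncorrelated member-level fluctuations (a sketch with assumed parameters, not a model from the post): when a shock hits a whole group together, the size change scales with the group’s size, while independent member-level noise scales only with the square root of size, so dispersion accumulates far more slowly.

```python
# Toy simulation (assumed parameters, not the post's model): groups whose
# sizes get correlated proportional shocks spread out far faster than
# groups whose sizes fluctuate only through independent member-level noise.
import numpy as np

rng = np.random.default_rng(1)
n_groups, n_periods, start_size = 10_000, 400, 1_000.0

corr = np.full(n_groups, start_size)    # whole-group proportional shocks
indep = np.full(n_groups, start_size)   # independent per-member fluctuations

sigma = 0.05
for _ in range(n_periods):
    # Correlated regime: the shock hits every member of a group together,
    # so the fluctuation is proportional to group size.
    corr *= np.exp(sigma * rng.standard_normal(n_groups))
    # Uncorrelated regime: members fluctuate independently, so the net
    # fluctuation scales only with sqrt(size).
    indep += sigma * np.sqrt(indep) * rng.standard_normal(n_groups)
    indep = np.maximum(indep, 1.0)      # groups can't go below one member

def top1_share(sizes):
    s = np.sort(sizes)
    return s[-len(s) // 100:].sum() / s.sum()   # share held by largest 1%

print("top-1% share, correlated shocks:  ", round(top1_share(corr), 3))
print("top-1% share, member-level noise: ", round(top1_share(indep), 3))
```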

Putting this all together, the number of species per genus and of individuals per family has long declined with size as a tail power of two. After the farming revolution, cities and nations could have correlated internal successes and larger feasible sizes, giving a thicker tail of big items. In the industry era, firms could also get very large. Today, nations, cities, and firms are all distributed with a tail power of one, above threshold scales of roughly three million, three thousand, and three, thresholds that have been rising with time.

My next post will discuss what these historical trends suggest about the future.

Trillions At War

The most breathtaking example of colony allegiance in the ant world is that of the Linepithema humile ant. Though native to Argentina, it has spread to many other parts of the world by hitching rides in human cargo. In California the biggest of these “supercolonies” ranges from San Francisco to the Mexican border and may contain a trillion individuals, united throughout by the same “national” identity. Each month millions of Argentine ants die along battlefronts that extend for miles around San Diego, where clashes occur with three other colonies in wars that may have been going on since the species arrived in the state a century ago. The Lanchester square law [of combat] applies with a vengeance in these battles. Cheap, tiny and constantly being replaced by an inexhaustible supply of reinforcements as they fall, Argentine workers reach densities of a few million in the average suburban yard. By vastly outnumbering whatever native species they encounter, the supercolonies control absolute territories, killing every competitor they contact. (more)
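
For reference, the Lanchester square law invoked in the quote is the standard aimed-fire attrition model; in it, fighting strength grows with the square of numbers, which is why sheer replaceable mass lets the supercolonies win.

```latex
% The Lanchester square law mentioned in the quote (standard attrition model,
% included here only for reference). A and B are the two sides' numbers,
% alpha and beta their per-soldier effectiveness:
\frac{dA}{dt} = -\beta B, \qquad \frac{dB}{dt} = -\alpha A .
% These equations conserve
\alpha A^2 - \beta B^2 = \text{const},
% so a side's effective strength scales with the square of its numbers.
```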

Shades of our future, as someday we will hopefully have quadrillions of descendants, and alas they will likely sometimes go to war.

Adapt Or Start Over?

Sean Carroll has doubts on nanotech:

Living organisms … can, in a wide variety of circumstances, repair themselves. … Which brings up something that has always worried me about nanotechnology … tiny machines that have been heroically constructed … just seem so darn fragile. … surely one has to worry about the little buggers breaking down. … So what you really want is microscopic machinery that is robust enough to repair itself. Fortunately, this problem has already been solved at least once: it’s called “life.” … This is why my utterly underinformed opinion is that the biggest advances will come not from nanotechnology, but from synthetic biology. (more)

There are four ways to deal with system damage: 1) reliability, 2) redundancy, 3) repair, and 4) replacement. Some designs are less prone to damage; with redundant parts all must fail for a system to fail; sometimes damage can be undone; and the faster a system is replaced the less robust it needs to be. Both artificial and natural systems use all four approaches. Artificial systems often have especially reliable parts, and so rely less on repair. And since they can coordinate better with outside systems, when they do repair they rely more on outside assistance – they have less need for self-repair. So I don’t see artificial systems as failing especially at self-repair.
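
A tiny worked example of why redundancy is so effective (illustrative numbers, not from the post): if a system fails only when all of its independent redundant parts fail, modest part reliability compounds into very high system reliability.

```latex
% Illustrative arithmetic for the redundancy strategy (assumed numbers).
% With n independent redundant parts, each failing with probability p per period,
P(\text{system fails}) \;=\; p^{\,n},
% e.g. p = 0.01 and n = 3 give 0.01^3 = 10^{-6}:
% three modestly reliable parts yield a highly reliable system.
```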

Nevertheless, Carroll’s basic concern has merit. It can be hard for new approaches to compete with complex tightly integrated approaches that have been adapted over a long time. We humans have succeeded in displacing natural systems with artificial systems in many situations, but in other cases we do better to inherit and adapt natural systems than to try to redesign from scratch. For example, if you hear a song you like, it usually makes more sense to just copy it, and perhaps adapt it to your preferred instruments or style, than to design a whole new song like it.  I’ve argued that we are not up to the task of designing cities from scratch, and that the first human-level artificial intelligences will use better parts but mostly copy structure from biological brains.

So what determines when we can successfully redesign from scratch, and when we are better off copying and adapting existing systems? Redesign makes more sense when we have access to far better parts, and when system designs are relatively simple, making system architecture especially important, especially if we can design better architecture. In contrast, it makes more sense to inherit and adapt existing systems when a few key architectural choices matter less, compared to system “content” (i.e., all the rest). As with songs, cities, and minds. I don’t have a strong opinion about which case applies best for nanotech.

Hail Temple, Buck

Two recent movies, Temple Grandin and Buck, depict the most inspirational real heroes I can recall. Temple Grandin and Buck Brannaman both pioneered ways to improve animal lives, by getting deep enough in animal heads to see how to avoid terrorizing them. Temple deals with cattle, Buck with horses. Terrorizing animals less also helps humans who deal with them.

Some lessons:

1) Neither is a purist. Both accept that animals often suffer, and are slaves of humans. Both work within the current system to make animal lives better, even if the result falls short of their ideals. Compromising with bad is often essential to doing good.

2) Though the two are similarly insightful, Grandin has a far bigger impact, as her innovations are embodied in physical capital, e.g., the layout of large plants, chosen by large firms. She has revolutionized an industry. In contrast, Brannaman’s innovations are embodied in human capital chosen by small organizations. While Brannaman is personally impressive, it is far from clear how much people like him have really changed common practice. Capital intensity does indeed promote innovation.

3) Many doubt that we should feel bad about animal suffering, because they doubt animal minds react like human minds to force, pain, etc. The impressive abilities of Grandin and Brannaman to predict animal behavior by imagining themselves in animal situations support their claim that cattle and horse fear and suffering are recognizably similar to human fear and suffering. I tentatively accept that such animals are afraid and suffer in similar ways to humans, with similar types of emotions and feelings, even if they cannot think or talk as abstractly about their suffering.

4) The fact that animals are slaves does not imply that animal lives have no value, or that nothing can affect that value. Slavery need not be worse than death, and usually isn’t. A future where the vast majority of our descendants are slaves could still be a glorious future, even if not as glorious as a future where they are not slaves.

Signal Mappers Decouple

Andrew Sullivan notes that Tim Lee argues that ems (whole brain emulations) just won’t work:

There’s no reason to think it will ever be possible to scan the human brain and create a functionally equivalent copy in software. Hanson … fails to grasp that the emulation of one computer by another is only possible because digital computers are the products of human designs, and are therefore inherently easier to emulate than natural systems. … Digital computers … were built by a human being based on a top-down specification that explicitly defines which details of their operation are important. The spec says exactly which aspects of the machine must be emulated and which aspects may be safely ignored. This matters because we don’t have anywhere close to enough hardware to model the physical characteristics of digital machines in detail. Rather, emulation involves re-implementing the mathematical model on which the original hardware was based. Because this model is mathematically precise, the original device can be perfectly replicated.

You can’t emulate a natural system because natural systems don’t have designers, and therefore weren’t built to conform to any particular mathematical model. … Creating a simulation of a natural system inherently means making judgment calls about which aspects of a physical system are the most important. And because there’s no underlying blueprint, these guesses are never perfect: it will always be necessary to leave out some details that affect the behavior of the overall system, which means that simulations are never more than approximately right. Weather simulations, for example, are never going to be able to predict precisely where each raindrop will fall, they only predict general large-scale trends, and only for a limited period of time. … We may have relatively good models for the operation of nerves, but these models are simplifications, and therefore they will differ in subtle ways from the operation of actual nerves. And these subtle micro-level inaccuracies will snowball into large-scale errors when we try to simulate an entire brain, in precisely the same way that small micro-level imperfections in weather models accumulate to make accurate long-range forecasting inaccurate. … Each neuron is itself a complex biological system. I see no reason to think we’ll ever be able to reduce it to a mathematically tractable model. (more; Eli Dourado agrees; Alex Waller disagrees.)

Human brains were not designed by humans, but they were designed. Evolution has imposed huge selection pressures on brains over millions of years, to perform very particular functions. Yes, humans use more math than does natural selection to assist them. But we should expect brain emulation to be feasible because brains function to process signals, and the decoupling of signal dimensions from other system dimensions is central to achieving the function of a signal processor. The weather is not a designed signal processor, so it does not achieve such decoupling. Let me explain.

A signal processor is designed to maintain some intended relation between particular inputs and outputs. All known signal processors are physical systems with vastly more degrees of freedom than are contained in the relevant inputs they seek to receive, the outputs they seek to send, or the sorts of dependencies between inputs and outputs they seek to maintain. So in order to manage its intended input-output relation, a signal processor simply must be designed to minimize the coupling between its designed input, output, and internal channels, and all of its other “extra” physical degrees of freedom. Really, just ask most any signal-processing hardware engineer.

Now sometimes random inputs can be useful in certain signal processing strategies, and this can be implemented by coupling certain parts of the system to most any random degrees of freedom. So signal processors don’t always want to minimize extra couplings. But this is a rare exception to the general need to decouple.

The bottom line is that to emulate a biological signal processor, one need only identify its key internal signal dimensions and their internal mappings – how input signals are mapped to output signals for each part of the system. These key dimensions are typically a tiny fraction of its physical degrees of freedom. Reproducing such dimensions and mappings with sufficient accuracy will reproduce the function of the system.
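
Here is a toy illustration of that claim (an invented example, not a model of real neurons): a "physical" system with a thousand internal degrees of freedom whose input-output behavior runs through only three signal dimensions can be emulated by reproducing just that low-dimensional mapping, assuming those dimensions and their mapping have been identified.

```python
# Toy illustration (invented example, not a neuron model): a "physical"
# system with 1,000 internal degrees of freedom whose output depends only
# on a 3-dimensional signal extracted from the input. An "emulator" that
# reproduces just that 3-dimensional mapping matches the behavior while
# ignoring the other degrees of freedom.
import numpy as np

rng = np.random.default_rng(2)
n_inputs, n_signal, n_extra = 50, 3, 1_000

# The system's designed signal path: project inputs to 3 signal dimensions,
# then apply a fixed nonlinearity to produce the output.
signal_weights = rng.standard_normal((n_signal, n_inputs))
readout = rng.standard_normal(n_signal)

def physical_system(x):
    extra_state = rng.standard_normal(n_extra)    # decoupled "extra" physics
    signal = np.tanh(signal_weights @ x)           # the part that matters
    noise = 1e-6 * extra_state[:n_signal]          # weak residual coupling
    return readout @ (signal + noise)

# The emulator keeps only the signal dimensions and their mapping.
def emulator(x):
    return readout @ np.tanh(signal_weights @ x)

# The emulation matches the physical system's input-output behavior closely,
# without tracking the extra degrees of freedom at all.
tests = rng.standard_normal((100, n_inputs))
errors = [abs(physical_system(x) - emulator(x)) for x in tests]
print("max input-output discrepancy:", max(errors))
```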

This is proven daily by the 200,000 people with artificial ears, and will be proven soon when artificial eyes are fielded. Artificial ears and eyes do not require a detailed weather-forecasting-like simulation of the vast complex physical systems that are our ears and eyes. Yes, such artificial organs do not exactly reproduce the input-output relations of their biological counterparts. I expect someone with one artificial ear and one real ear could tell the difference. But the reproduction is close enough to allow the artificial versions to perform most of the same practical functions.

We are confident that the number of relevant signal dimensions in a human brain is vastly smaller than its physical degrees of freedom. But we do not know just how many such dimensions there are. The more dimensions, the harder it will be to emulate them. But the fact that human brains continue to function with nearly the same effectiveness when they are whacked on the side of the head, or when flooded with various odd chemicals, shows they have been designed to decouple from most other physical brain dimensions.

The brain still functions reasonably well even flooded with chemicals specifically designed to interfere with neurotransmitters, the key chemicals by which neurons send signals to each other! Yes people on “drugs” don’t function exactly the same, but with moderate drug levels people can still perform most of the functions required for most jobs.

Remember, my main claim is that whole brain emulation will let machines substitute for humans across the vast majority of the world economy. The equivalent of human brains on mild drugs should be plenty sufficient for this purpose – we don’t need exact replicas.

Added 7p: Tim Lee responds:

Hanson seems to be making a different claim here than he made in his EconTalk interview. There his claim seemed to be that we didn’t need to understand how the brain works in any detail because we could simply scan a brain’s neurons and “port” them to a silicon substrate. Here, in contrast, he’s suggesting that we determine the brain’s “key internal signal dimensions and their internal mappings” and then build a digital system that replicates these higher-level functions. Which is to say we do need to understand how the brain works in some detail before we can duplicate it computationally. …

Biologists know a ton about proteins. … Yet despite all our knowledge, … general protein folding is believed to be computationally intractable. … My point is that even detailed micro-level knowledge of a system doesn’t necessarily give us the capacity to efficiently predict its macro-level behavior. … By the same token, even if we had a pristine brain scan and a detailed understanding of the micro-level properties of neurons, there’s no good reason to think that simulating the behavior of 100 billion neurons will ever be computationally tractable.

My claim is that, in order to create economically-sufficient substitutes for human workers, we don’t need to understand how the brain works beyond having decent models of each cell type as a signal processor. Like the weather, protein folding is not designed to process signals and so does not have the decoupling feature I describe above. Brain cells are designed to process signals in the brain, and so should have a much simplified description in signal processing terms. We already have pretty good signal-processing models of some cell types; we just need to do the same for all the other cell types.
