Tag Archives: Biology

Tyler Says Never Ems

There are smart intellectuals out there who think economics is all hogwash, and who resent economists continuing on while their concerns have not been adequately addressed. Similarly, people in philosophy of religion and philosophy of mind resent cosmologists and brain scientists continuing on as if one could just model cosmology without a god, or reduce the mind to physical interactions of brain cells. But in my mind such debates have become so stuck that there is little point in waiting until they are resolved; some of us should just get on with assuming particular positions, especially positions that seem so very reasonable, even obvious, and seeing where they lead.

Similarly, I have heard people debate the feasibility of ems for many decades, and such debates have also become stuck, making little progress. Instead of getting mired in that debate, I thought it better to explore the consequences of what seems to me the very reasonable position that ems will eventually be possible. Alas, that mud pit has strong suction. For example, Tyler Cowen:

Do I think Robin Hanson’s “Age of Em” actually will happen? … my answer is…no! .. Don’t get me wrong, I still think it is a stimulating and wonderful book.  And if you don’t believe me, here is The Wall Street Journal:

Mr. Hanson’s book is comprehensive and not put-downable.

But it is best not read as a predictive text, much as Robin might disagree with that assessment.  Why not?  I have three main reasons, all of which are a sort of punting, nonetheless on topics outside one’s areas of expertise deference is very often the correct response.  Here goes:

1. I know a few people who have expertise in neuroscience, and they have never mentioned to me that things might turn out this way (brain scans uploaded into computers to create actual beings and furthermore as the dominant form of civilization).  Maybe they’re just holding back, but I don’t think so.  The neuroscience profession as a whole seems to be unconvinced and for the most part not even pondering this scenario. ..

3. Robin seems to think the age of Em could come about reasonably soon. …  Yet I don’t see any sign of such a radical transformation in market prices. .. There are for instance a variety of 100-year bonds, but Em scenarios do not seem to be a factor in their pricing.

But the author of that Wall Street Journal review, Daniel J. Levitin, is a neuroscientist! You’d think that if his colleagues thought the very idea of ems iffy, he might have mentioned caveats in his review. But no, he worries only about timing:

The only weak point I find in the argument is that it seems to me that if we were as close to emulating human brains as we would need to be for Mr. Hanson’s predictions to come true, you’d think that by now we’d already have emulated ant brains, or Venus fly traps or even tree bark.

Because readers kept asking, in the book I give a concrete estimate of “within roughly a century or so.” But the book really doesn’t depend much on that estimate. What it mainly depends on is ems initiating the next huge disruption on the scale of the farming or industrial revolutions. Also, if the future is important enough to have a hundred books exploring scenarios, it can be worth having books on scenarios with only a 1% chance of happening, and taking those books seriously as real possibilities.

Tyler has spent too much time around media pundits if he thinks he should be hearing a buzz about anything big that might happen in the next few centuries! Should he have expected to hear about cell phones in 1960, or smart phones in 1980, from a typical phone expert then, even without asking directly about such things? Both of these were reasonably foreseen many decades in advance, yet you’d have found it hard to see signs of them, several decades before they took off, in casual conversations with phone experts or in phone firm stock prices. (Betting markets directly on these topics would have seen them. Alas, we still don’t have such things.)

I’m happy to accept neuroscientist expertise, but mainly on how hard it is to scan brain cells and model them on computers. This isn’t going to come up in casual conversation, but if asked, neuroscientists will pretty much all agree that it should eventually be possible to create computer models of brain cells that capture their key signal processing behavior, i.e., the part that matters for signals received by the rest of the body. They will say it is a matter of when, not if. (Remember, we’ve already done this for the key signal processing behaviors of eyes and ears.)

Many neuroscientists won’t be familiar with computer modeling of brain cell activity, so they won’t have much of an idea of how much computing power is needed. But for those familiar with computer modeling, the key question is: once we understand brain cells well, what are plausible ranges for 1) the number of bits required to store the current state of each inactive brain cell, and 2) the number of computer processing steps (or gate operations) per second needed to mimic an active cell’s signal processing?

Once you have those numbers, you’ll need to talk to people familiar with computing cost projections to translate these computing requirements into dates when they can be met cheaply. And then you’d need to talk to economists (like me) to understand how that might influence the economy. You shouldn’t remotely expect typical neuroscientists to have good estimates there. And finally, you’ll have to talk to people who think about other potential big future disruptions to see how plausible it is that ems will be the first big upcoming disruption on the scale of the farming or industrial revolutions.
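To make that chain concrete, here is a toy back-of-envelope sketch. Every number in it is a placeholder of mine, there only to show how the neuroscience, hardware-cost, and economics inputs combine; none is an estimate from the book or from any expert.

```python
# Toy back-of-envelope sketch; all numbers below are placeholder assumptions.
import math

# Neuroscience-side guesses (placeholders)
num_neurons = 8.6e10           # rough human neuron count
ops_per_cell_per_sec = 1e5     # assumed gate-ops/sec to mimic one active cell
bits_per_cell = 1e4            # assumed bits to store one inactive cell's state

total_ops_per_sec = num_neurons * ops_per_cell_per_sec   # compute to run one em in real time
total_storage_bits = num_neurons * bits_per_cell          # storage for one paused em

# Economics-side guesses (placeholders)
ops_per_sec_per_dollar_year = 1e9   # assumed: $1/year of hardware spending sustains 1e9 ops/sec
annual_price_decline = 0.30         # assumed 30%/year fall in compute prices
affordable_cost_per_year = 1e4      # assumed $/year at which ems become economically interesting

cost_now = total_ops_per_sec / ops_per_sec_per_dollar_year   # $/year today, under assumptions
years = max(0.0, math.log(cost_now / affordable_cost_per_year)
                 / -math.log(1 - annual_price_decline))

print(f"compute needed: {total_ops_per_sec:.1e} ops/sec, storage: {total_storage_bits:.1e} bits")
print(f"cost today: ${cost_now:,.0f}/year; affordable in roughly {years:.0f} years (toy numbers)")
```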


Reply to Jones on Ems

In response to Richard Jones’ book review, I said:

So according to Jones, we can’t trust anthropologists to describe foragers they’ve met, we can’t trust economics when tech changes society, and familiar design principles fail for understanding brains and tiny chemical systems. Apparently only his field, physics, can be trusted well outside current experience. In reply, I say I’d rather rely on experts in each field, relative to his generic skepticism. Brain scientists see familiar design principles as applying to brains, even when designed by evolution, economists see economics as applying to past and distant societies with different tech, and anthropologists think they can understand cultures they visit.

Jones complained on twitter that I “prefer to argue from authority rather than engage with their substance.” I replied “There can’t be much specific response to generic skepticism,” to which he replied, “Well, there’s more than 4000 words of quite technical argument on the mind uploading question in the post I reference.” He’s right that he wrote 4400 words. But let me explain why I see them more as generic skepticism than technical argument.

For context, note that there are whole fields of biological engineering, wherein standard engineering principles are used to understand the engineering of biological systems. These include the design of many specific systems within organisms, such as lungs, blood, muscles, bone, and skin, and also specific subsystems within cells, and also standard behaviors, such as gait rhythms and foraging patterns. Standard design principles are also used to understand why cells are split into different modules that perform distinct functions, instead of having each cell try to contribute to all functions, and why only a few degrees of freedom for each cell matter for that cell’s contribution to its system. Such design principles can also be used to understand why systems are abstract, in the sense of having only one main type of muscle, for creating forces used for many purposes, one main type of blood system, to move most everything around, or only one main fast signal system, for sending signals of many types.

Our models of the function of many key organs have in fact often enabled us to create functional replacements for them. In addition, we already have good models of, and successful physical emulations of, key parts of the brain’s input and output, such as input from eyes and ears, and output to arms and legs.

Okay, now here are Jones’ key words:

This separation between the physical and the digital in an integrated circuit isn’t an accident or something pre-ordained – it happens because we’ve designed it to be that way. For those of us who don’t accept the idea of intelligent design in biology, that’s not true for brains. There is no clean “digital abstraction layer” in a brain – why should there be, unless someone designed it that way?

But evolution does design, and its designs do respect standard design principles. Evolution has gained by using both abstraction and modularity. Organs exist. Humans may be better in some ways than evolution at searching large design spaces, but biology definitely designs.

In a brain, for example, the digital is continually remodelling the physical – we see changes in connectivity and changes in synaptic strength as a consequence of the information being processed, changes, that as we see, are the manifestation of substantial physical changes, at the molecular level, in the neurons and synapses.

We have programmable logic devices, such as FPGAs, which can do exactly this.
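The same point can be made in ordinary software. Here is a toy sketch of mine (not from Jones or from the book): a single model neuron whose "wiring" is itself just more stored state, continually rewritten by the information being processed.

```python
# Toy illustration (mine, not from the post): a purely computational model in which
# the information being processed remodels the "wiring" itself: here, a single
# rate neuron whose synaptic weights are updated by a simple Hebbian rule.
import numpy as np

rng = np.random.default_rng(0)
n_inputs = 5
weights = rng.normal(0.0, 0.1, n_inputs)   # the "connectivity" being remodeled
learning_rate = 0.01

for step in range(1000):
    x = rng.random(n_inputs)                # incoming signals
    y = np.tanh(weights @ x)                # the neuron's output
    # Hebbian update with decay: activity itself rewrites the connection strengths
    weights += learning_rate * (y * x - 0.1 * weights)

print("final weights:", np.round(weights, 3))
```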

Underlying all these phenomena are processes of macromolecular shape change in response to a changing local environment. .. This emphasizes that the fundamental unit of biological information processing is not the neuron or the synapse, it’s the molecule.

But you could make that same sort of argument about all organs, such as bones, muscles, lungs, blood, etc., and say we also can’t understand or emulate them without measuring and modeling them in molecular detail. Similarly for the brain input/output systems that we have already emulated.

Determining the location and connectivity of individual neurons .. is necessary, but far from sufficient condition for specifying the informational state of the brain. .. The molecular basis of biological computation means that it isn’t deterministic, it’s stochastic, it’s random.

Randomness is quite easy to emulate, and most who see ems as possible expect to need brain scans with substantial chemical, in addition to spatial, resolution.
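As a tiny toy illustration of mine, with made-up numbers: the molecular randomness of synaptic transmission can be emulated by a simple random draw, with no need to track individual molecules.

```python
# Tiny illustration (mine): molecular-level randomness in synaptic transmission can be
# emulated by drawing from a distribution, without simulating individual molecules.
import numpy as np

rng = np.random.default_rng(1)
release_prob = 0.3        # assumed per-vesicle release probability
n_release_sites = 20      # assumed number of release sites at one synapse

# Each presynaptic spike releases a random number of vesicles; emulate with a binomial draw.
vesicles_released = rng.binomial(n_release_sites, release_prob, size=10)
print(vesicles_released)
```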

And that’s it, that is Jones’ “technical” critique. Since biological systems are made by evolution, human design principles don’t apply; and since they are made of molecules, one can’t emulate them without measuring and modeling at the molecular level. Never mind that we have actually seen design principles apply, and have emulated such systems while ignoring molecules. That’s what I call “generic skepticism”.

In contrast, I say brains are signal processing systems, and applying standard design principles to such systems tells us:

To manage its intended input-output relation, a signal processor simply must be designed to minimize the coupling between its designed input, output, and internal channels, and all of its other “extra” physical degrees of freedom. ..  To emulate a biological signal processor, one need only identify its key internal signal dimensions and their internal mappings – how input signals are mapped to output signals for each part of the system. These key dimensions are typically a tiny fraction of its physical degrees of freedom. Reproducing such dimensions and mappings with sufficient accuracy will reproduce the function of the system. This is proven daily by the 200,000 people with artificial ears, and will be proven soon when artificial eyes are fielded.
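Here is a toy sketch of mine of that emulation strategy: treat a signal processor as a black box, record its input-output pairs, fit the mapping, and verify that the fit reproduces the box’s function without any model of its internals.

```python
# Minimal sketch (mine, not from the book): emulate a signal processor from its
# input-output mapping alone. The "black box" here is a hidden linear filter;
# we fit an FIR model from recorded inputs and outputs and reproduce its behavior.
import numpy as np

rng = np.random.default_rng(2)
hidden_filter = np.array([0.5, -0.3, 0.2, 0.1])   # unknown internals of the black box

def black_box(x):
    return np.convolve(x, hidden_filter, mode="full")[: len(x)]

# Record the box's behavior
x = rng.normal(size=2000)
y = black_box(x)

# Fit an FIR emulator by least squares on lagged inputs
n_taps = 8
X = np.column_stack([np.concatenate([np.zeros(k), x[: len(x) - k]]) for k in range(n_taps)])
taps, *_ = np.linalg.lstsq(X, y, rcond=None)

# The emulator reproduces the box's function on new inputs
x_new = rng.normal(size=500)
X_new = np.column_stack([np.concatenate([np.zeros(k), x_new[: len(x_new) - k]]) for k in range(n_taps)])
print("max error:", np.max(np.abs(X_new @ taps - black_box(x_new))))
```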


Monster Pumps

Yesterday’s Science has a long paper on an exciting new scaling law. For a century we’ve known that larger organisms have lower per-mass metabolisms, and thus lower growth rates. Total metabolism goes as size to the power of 3/4 over at least twenty orders of magnitude.


So our largest organisms have a per-mass metabolism one hundred thousand times lower than our smallest organisms.

The new finding is that local metabolism also goes as local biomass density to the power of roughly 3/4, over at least three orders of magnitude. This implies that life in dense areas like jungles is just slower and lazier on average than is life in sparse areas like deserts. And this implies that the ratio of predator to prey biomass is smaller in jungles compared to deserts.

When I researched how to cool large em cities I found that our best cooling techs scale quite nicely, and so very big cities need only pay a small premium for cooling compared to small cities. However, I’d been puzzled about why biological organisms seem to pay much higher premiums to be large. This new paper inspired me to dig into the issue.

What I found is that human engineers have found ways to scale large fluid distribution systems that biology has just never figured out. For example, the hearts that pump blood through animals are periodic pumps, and such pumps have the problem that the pulses they send through the bloodstream can reflect back from joints where blood vessels split into smaller vessels. There are ways to design joints to eliminate this, but those solutions create a total volume of blood vessels that doesn’t scale well. Another problem is that blood vessels taking blood to and from the heart are often near enough to each other to leak heat, which can also create a bad scaling problem.
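To see the reflection problem concretely, here is a toy sketch of mine using the standard transmission-line analogy for pulse waves in vessels; all numbers are made up. A pulse hitting a branch point reflects unless the combined admittance of the daughter vessels matches that of the parent vessel.

```python
# Sketch (mine, illustrative numbers): pulse-wave reflection at a vessel branch point,
# using the standard transmission-line analogy. A vessel's characteristic admittance is
# roughly area / (blood density * pulse wave speed); reflection vanishes when the
# daughters' admittances sum to the parent's.
def admittance(area_m2, wave_speed_m_s, density_kg_m3=1050.0):
    return area_m2 / (density_kg_m3 * wave_speed_m_s)

def reflection_coefficient(parent, daughters):
    y0 = admittance(*parent)
    yd = sum(admittance(*d) for d in daughters)
    return (y0 - yd) / (y0 + yd)

parent = (3.0e-4, 5.0)                       # made-up area (m^2) and wave speed (m/s)
matched = [(1.5e-4, 5.0), (1.5e-4, 5.0)]     # daughters sized so admittances match: no reflection
mismatched = [(0.8e-4, 6.0), (0.8e-4, 6.0)]  # smaller, stiffer daughters: partial reflection

print("matched junction:    R =", round(reflection_coefficient(parent, matched), 3))
print("mismatched junction: R =", round(reflection_coefficient(parent, mismatched), 3))
```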

The net result is that big organisms on Earth are just noticeably sluggish compared to small ones. But big organisms don’t have to be sluggish; that is just an accident of the engineering failures of Earth biology. If there is a planet out there where biology has figured out how to efficiently scale its blood vessels, such as by using continuous pumps, the organisms on that planet will have fewer barriers to growing large and active. Efficiently designed large animals on Earth could easily have metabolisms that are thousands of times faster than in existing animals. So, if you don’t already have enough reasons to be scared of alien monsters, consider that they might have far faster metabolisms, and also be very large.

This seems yet another reason to think that biology will soon be over. Human culture is inventing so many powerful advances that biology never found, innovations that are far easier to integrate into the human economy than into biological designs. Descendants that integrate well into the human economy will just outcompete biology.

I also spent a little time thinking about how one might explain the dependence of metabolism on biomass density. I found I could explain it by assuming that the more biomass there is in some area, the less energy each unit of biomass gets from the sun. Specifically, I assume that the energy collected from the sun by the biomass in some area has a power law dependence on the biomass in that area. If biomass were very efficiently arranged into thin solar collectors then that power would be one. But since we expect some biomass to block the view of other biomass, a problem that gets worse with more biomass, the power is plausibly less than one. Let’s call a the power that relates biomass density B to energy collected per area E, as in E = cB^a.

There are two plausible scenarios for converting energy into new biomass. When the main resource needed to make new biomass via metabolism is just energy, to create molecules that embody more energy in their arrangement, then M = cB^(a-1), where M is the rate of production of new biomass relative to old biomass. When new biomass doesn’t need much energy, but does need thermodynamically reversible machinery to rearrange molecules, then M = cB^((a-1)/2). These two scenarios reproduce the observed 3/4 power scaling law when a = 3/4 and a = 1/2 respectively. When making new biomass requires both simple energy and reversible machinery, the required power a is somewhere between 1/2 and 3/4.
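Spelling out the arithmetic behind those two exponents (my reconstruction, using the assumptions above and taking the observed 3/4 power to mean per-mass metabolism scales as B^(-1/4)):

```latex
% My reconstruction of the arithmetic, under the post's assumptions.
% Energy collected per area: E = c B^a.
% Observed 3/4 power: per-mass metabolism M scales as B^{-1/4}.
\[
  \text{Energy-limited case: } M = \frac{E}{B} = c\,B^{a-1},
  \qquad a - 1 = -\tfrac{1}{4} \;\Rightarrow\; a = \tfrac{3}{4}.
\]
\[
  \text{Reversible-machinery case: } M = c\,B^{(a-1)/2},
  \qquad \tfrac{a-1}{2} = -\tfrac{1}{4} \;\Rightarrow\; a = \tfrac{1}{2}.
\]
```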

Added 14Sep: On reflection and further study, it seems that biologists just do not have a good theory for the observed 3/4 power. In addition, the power deviates substantially from 3/4 within smaller datasets.


More Whales Please

I was struck by this quote in the paper cited in my last post:

The biosphere considered as a whole has managed to expand the amount of solar energy captured for metabolism to around 5%, limited by the nonuniform presence of key nutrients across the Earth’s surface — primarily fresh water, phosphorus, and nitrogen. Life on Earth is not free-energy-limited because, up until recently, it has not had the intelligence and mega-engineering to distribute Earth’s resources to all of the places solar energy happens to fall, and so it is, in most places, nutrient-limited. (more)

That reminded me of reading earlier this year about how whale poop was once a great nutrient distributor:

A couple of centuries ago, the southern seas were packed with baleen whales. Blue whales, the biggest creatures on Earth, were a hundred times more plentiful than they are today. Biologists couldn’t understand how whales could feed themselves in such an iron-poor environment. And now we may have an answer: Whales are extraordinary recyclers. What whales consume (which is a lot), they give back. (more)

It seems we should save (and expand) the whales because of their huge positive externality on other fish. If humans manage to increase the fraction of solar energy used by life on Earth, it will be primarily because of trade and transport. Transport gives us the ability to move lots of nutrients, and trade gives us the incentives to move them.


Irreducible Detail

Our best theories vary in generality. Some theories are very general, but most are more context specific. Putting all of our best theories together usually doesn’t let us make exact predictions on most variables of interest. We often express this fact formally in our models via “noise,” which represents other factors that we can’t yet predict.

For each of our theories there was a point in time when we didn’t have it yet. Thus we expect to continue to learn more theories, which will let us make more precise predictions. And so it might seem like we can’t constrain our eventual power of prediction; maybe we will have powerful enough theories to predict everything exactly.

But that doesn’t seem right either. Our best theories in many areas tell us about fundamental limits on our prediction abilities, and thus limits on how powerful future simple general theories could be. For example:

  • Thermodynamics – We can predict some gross features of future physical states, but the entropy of a system sets a very high (negentropy) cost to learn precise info about the state of that system. If thermodynamics is right, there will never be a general theory to let one predict future states more cheaply than this.
  • Finance – Finance theory has identified many relevant parameters to predict the overall distribution of future asset returns. However, finance theory strongly suggests that it is usually very hard to predict details of the specific future returns of specific assets. The ability to do so would be worth such a huge amount that there just can’t be many who often have such an ability. The cost to gain such an ability must usually be more than the gains from trading it.
  • Cryptography – A well devised code looks random to an untrained eye. As there are a great many possible codes, and a great many ways to find weaknesses in them, it doesn’t seem like there could be any general way to break all codes. Instead code breaking is a matter of knowing lots of specific things about codes and ways they might be broken. People use codes when they expect the cost of breaking them to be prohibitive, and such expectations are usually right.
  • Innovation – Economic theory can predict many features of economies, and of how economies change and grow. And innovation contributes greatly to growth. But economists also strongly expect that the details of particular future innovations cannot be predicted except at a prohibitive cost. Since knowing of innovations ahead of time can often be used for great private profit, and would speed up the introduction of those innovations, it seems that no cheap-to-apply simple general theories can exist which predict the details of most innovations well ahead of time.
  • Ecosystems – We understand some ways in which parameters of ecosystems correlate with their environments. Most of these make sense in terms of general theories of natural selection and genetics. However, most ecologists strongly suspect that the vast majority of the details of particular ecosystems and the species that inhabit them are not easily predictable by simple general theories. Evolution says that many details will be well matched to other details, but to predict them you must know much about the other details to which they match.

In thermodynamics, finance, cryptography, innovations, and ecosystems, we have learned that while there are many useful generalities, the universe is also chock full of important irreducible incompressible detail. As this is true at many levels of abstraction, I would add this entry to the above list:

  • Intelligence – General theories tell us what intelligence means, and how it can generalize across tasks and contexts. But most everything we’ve learned about intelligence suggests that the key to smarts is having many not-fully-general tools. Human brains are smart mainly by containing many powerful not-fully-general modules, and using many modules to do each task. These modules would not work well in all possible universes, but they often do in ours. Ordinary software also gets smart by containing many powerful modules. While the architecture that organizes those modules can make some difference, that difference is mostly small compared to having more and better modules. In a world of competing software firms, most ways to improve modules or find new ones cost more than the profits they’d induce.

If most value in intelligence comes from the accumulation of many expensive parts, there may well be no powerful general theories to be discovered to revolutionize future AI, and give an overwhelming advantage to the first project to discover them. Which is the main reason that I’m skeptical about AI foom, the scenario where an initially small project quickly grows to take over the world.

Added 7p: Peter McCluskey has thoughtful commentary here.


Does complexity bias biotechnology towards doing damage?

A few months ago I attended the Singularity Summit in Australia. One of the presenters was Randal Koene (videos here), who spoke about technological progress towards whole brain emulation, and some of the impacts this advance would have.

Many enthusiasts – including Robin Hanson on this blog – hope to use mind uploading to extend their own lives. Mind uploading is an alternative to more standard ‘biological’ methods for preventing ageing proposed by others such as Aubrey de Grey of the Methuselah Foundation. Randal believes that proponents of using medicine to extend lives underestimate the difficulty of what they are attempting to do. The reason is that evolution has led to a large number of complex and interconnected molecular pathways which cause our bodies to age and decay. Stopping one pathway won’t extend your life by much, because another will simply cause your death soon after. Controlling contagious diseases extended our lives, but not for very long, because we ran up against cancer and heart disease. Unless some ‘master ageing switch’ turns up, suspending ageing will require discovering, unpacking and intervening in dozens of things that the body does. Throwing out the body, and taking the brain onto a computer, though extremely difficult, might still be the easier option.

This got me thinking about whether biotechnology can be expected to help or hurt us overall. My impression is that the practical impact of biotechnology on our lives has been much less than most enthusiasts expected. I was drawn into a genetics major at university out of enthusiasm for ideas like ‘golden rice’ and ‘designer babies’, but progress towards actually implementing these technologies is remarkably slow. Pulling apart the many kludges evolution has thrown into existing organisms is difficult. Manipulating them to reliably get the change you want, without screwing up something else you need, even more so.

Unfortunately, while making organisms work better is enormously challenging, damaging them is pretty easy. For a human to work, a lot needs to go right. For a human to fail, not much needs to go wrong. As a rule, fiddling with a complex system is a lot more likely to ruin it than improve it. As a result, a simple organism like the influenza virus can totally screw us up, even though killing its host offers it no particular evolutionary advantage:

Few pathogens known to man are as dangerous as the H5N1 avian influenza virus. Of the 600 reported cases of people infected, almost 60 per cent have died. The virus is considered so dangerous in the UK and Canada that research can only be performed in the highest biosafety level laboratory, a so-called BSL-4 lab. If the virus were to become readily transmissible from one person to another (it is readily transmissible between birds but not humans) it could cause a catastrophic global pandemic that would substantially reduce the world’s population.

The 1918 Spanish flu pandemic was caused by a virus that killed less than 2 per cent of its victims, yet went on to kill 50m worldwide. A highly pathogenic H5N1 virus that was as easily transmitted between humans could kill hundreds of millions more.


Sleep Is To Save Energy

Short sleepers, about 1% to 3% of the population, function well on less than 6 hours of sleep without being tired during the day. They tend to be unusually energetic and outgoing. (more)

What fundamental cost do short sleepers pay for their extra wakeful hours? A recent Science article collects an impressive range of evidence (quoted below) to support the theory that the main function of sleep is just to save energy – sleeping brains use a lot less energy, and wakeful human brains use as much as 25% of body energy. People vary in how much sleep they are programmed to need, and if this theory is correct the main risk short sleepers face is that they’ll more easily starve to death in very lean times.

Of course once we were programmed to regularly sleep to save energy, no doubt other biological and mental processes were adapted to take some small advantages from this arrangement. And once those adaptations are in place, it might become expensive for a body to violate those expectations. One person might need a lot of sleep because their body expects it, while another body that isn’t programmed to expect as much sleep needn’t pay much of a cost for skipping it, aside from the higher energy cost of running an energy-expensive brain for more hours.

This has dramatic implications for the em future I’ve been exploring. Ems could be selected from among the 1-3% of humans who need less sleep, and we needn’t expect to pay any systematic cost for this in other parameters, other than due to there being only a finite number of humans to pick from. We might even find the global brain parameters that bodies now use to tell brains when they need sleep, and change their settings to turn ems of humans who need a lot of sleep into ems who need a lot less sleep. Average em sleep hours might then plausibly become six hours a night or less.

Those promised quotes: Continue reading "Sleep Is To Save Energy" »


Are Firms Like Trees?

Trees are spectacularly successful, and have been for millions of years. They now cover ~30% of Earth’s land. So trees should be pretty well designed to do what they do. Yet the basic design of trees seems odd in many ways. Might this tell us something interesting about design?

A tree’s basic design problem is how to cheaply hold leaves as high as possible to see the sun, and not be blocked by other trees’ leaves. This leaf support system must be robust to the buffeting of winds and animals. Materials should resist being frozen, burned, and eaten by animals and disease. Oh, and the whole thing must keep functioning as it grows from a tiny seed.

Here are three odd features of tree design:

  1. Irregular-Shaped – Humans often design structures to lift large surface areas up high, and even to have them face the sun. But human designs are usually far more regular than trees. Our buildings and solar cell arrays tend to be regular, and usually rectangular. Trees, in contrast, are higgledy-piggledy (see picture above). The regularity of most animal bodies shows that trees could have been regular, with each part in its intended place. Why aren’t tree bodies regular?
  2. Self-Blocking – Human-designed solar cells, and sets of windows that serve a similar function, manage to avoid overly blocking each other. Cell/window elements tend to be arranged along a common surface. In trees, in contrast, leaves often block each other from the sun. Yet animal design again shows that evolution could have put leaves along a regular surface – consider the design of skin or wings. Why aren’t tree leaves on a common surface?
  3. Single-Support – Human structures for lifting things high usually have at least three points of support on the ground. (As do most land animals.) This helps them deal with random weight imbalances and sideways forces like winds. Yet each tree usually only connects to the ground via a single trunk. It didn’t have to be this way. Some fig trees set down more roots when older branches sag down to the ground. And just as people trying to stand on a shifting platform might hold each other’s hands for balance, trees could be designed to have some branches interlock with branches from neighboring trees for support. Why is tree support singular?

Now it is noteworthy that large cities also tend to have weaker forms of these features. Cities are less regular than buildings, buildings often block sunlight to neighboring buildings, and while each building has at least three supports, neighboring buildings rarely attach to each other for balance. What distinguishes cities and trees from buildings?

One key difference is that buildings are made all at once on land that is calm and clear, while cities and trees grow slowly in a changing environment, competing for resources. Since most small trees never live to be big trees, their choices must focus on current survival and local growth. A tree opportunistically adds its growth in whatever direction seems most open to sun at the moment, with less of a long term growth plan. Since this local growth ends up committing the future shape of the tree, local opportunism tends toward an irregular structure.

I’m less clear on explanations for self-blocking and single-support. Sending branches sideways to create new supports might seem to distract from rising higher, but if multiple supports allow a higher peak it isn’t clear why this isn’t worth waiting for. Neighboring tree connections might try to grab more support than they offer, or pull one down when they die. But it isn’t clear why tree connections couldn’t be weak and breakable to deal with such issues, or why trees couldn’t connect preferentially with kin.

Firms also must grow from small seeds, and most small firms never make it to be big firms. Perhaps an analogy with trees could help us understand why successful firms seem irregular and varied in structure, why they often work at cross-purposes internally, and why merging them with weakly related firms is usually a bad idea.


The History of Inequality

I recently posted on how cities and firms are distributed as a Zipf power law, with a power of one, where above some threshold each scale holds roughly the same number of people, until the size where the world holds less than one. Turns out, this also holds for nations:

[Figure: log nation size vs. log rank]

The threshold below which there are few nations is roughly three million people. For towns/cities this threshold scale is about three thousand, and for firms it is about three. What were such things distributed like in the past?

I recall that the US today produces few new towns, though centuries ago they formed often. So the threshold scale for towns has risen, probably due to minimum scales needed for efficient town services like electricity, sewers, etc. I’m also pretty sure that early in the farming era lots of folks lived in nations of a million or less. So the threshold scale for nations has also risen.

Before the industrial revolution, there were very few firms of any substantial scale. So during the farming era firms existed but could not have been distributed by Zipf’s law. So if firms had a power law distribution then, it must have had a much steeper power.

If we look all the way back to the forager era, then cities and nations could also not plausibly have had a Zipf distribution — there just were none of any substantial scale. So surely their size distribution also fell off faster than Zipf, as individual income does today.

Looking further back, at biology, the number of individuals per species is distributed nearly log-normally. But the number of species per genus, and the number of individuals with a given family name or ancestor, have long been distributed via a steeper tail, with number falling as nearly the square of size.
This lower inequality comes because fluctuations in the size of genera and family names are mainly due to uncorrelated fluctuations of their members, rather than to correlated shocks that help or hurt an entire firm, city, or nation together. While this distribution holds less inequality in the short run, still over very long runs it accumulates into vast inequality. For example, most species today descend from a tiny fraction of the species alive hundreds of millions of years ago.

Putting this all together, the number of species per genus and the number of individuals per family have long declined with size as a tail power of two. After the farming revolution, cities and nations could have correlated internal successes and larger feasible sizes, giving a thicker tail of big items. In the industrial era, firms could also get very large. Today, nations, cities, and firms are all distributed with a tail power of one, above threshold scales of (three) million, thousand, and one, thresholds that have been rising with time.
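For readers who want to check such tail powers themselves, here is a toy sketch of mine for estimating a tail power from rank-size data, as one might for lists of nation, city, or firm sizes; the data below are synthetic, just to show the method.

```python
# Sketch (mine): estimate the tail power of a size distribution from rank-size data.
# For a Zipf-like tail, log(rank) falls roughly linearly in log(size), and the
# magnitude of the slope is the tail power.
import numpy as np

def tail_power(sizes, threshold):
    """Fit log(rank) ~ -power * log(size) for items above a size threshold."""
    s = np.sort(np.asarray([x for x in sizes if x >= threshold]))[::-1]
    ranks = np.arange(1, len(s) + 1)
    slope, _ = np.polyfit(np.log(s), np.log(ranks), 1)
    return -slope

# Synthetic Pareto samples with true tail powers of 1 and 2 (illustrative only)
rng = np.random.default_rng(3)
zipf_like = (1.0 / rng.random(5000)) ** (1.0 / 1.0)   # tail power ~ 1
steeper   = (1.0 / rng.random(5000)) ** (1.0 / 2.0)   # tail power ~ 2

print("estimated powers:", round(tail_power(zipf_like, 10), 2), round(tail_power(steeper, 3), 2))
```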

My next post will discuss what these historical trends suggest about the future.


Trillions At War

The most breathtaking example of colony allegiance in the ant world is that of the Linepithema humile ant. Though native to Argentina, it has spread to many other parts of the world by hitching rides in human cargo. In California the biggest of these “supercolonies” ranges from San Francisco to the Mexican border and may contain a trillion individuals, united throughout by the same “national” identity. Each month millions of Argentine ants die along battlefronts that extend for miles around San Diego, where clashes occur with three other colonies in wars that may have been going on since the species arrived in the state a century ago. The Lanchester square law [of combat] applies with a vengeance in these battles. Cheap, tiny and constantly being replaced by an inexhaustible supply of reinforcements as they fall, Argentine workers reach densities of a few million in the average suburban yard. By vastly outnumbering whatever native species they encounter, the supercolonies control absolute territories, killing every competitor they contact. (more)

Shades of our future, as someday we will hopefully have quadrillions of descendants, and alas they will likely sometimes go to war.
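For those unfamiliar with the Lanchester square law that the quote invokes, here is a toy sketch of mine with made-up numbers: when each side’s losses are proportional to the size of the opposing force, a big numerical edge lets the larger side wipe out the smaller while losing relatively few, even against individually far more effective opponents.

```python
# Toy sketch (mine, made-up numbers): Lanchester square-law attrition, where each
# side's losses are proportional to the size of the opposing force.
def lanchester(big, small, big_effectiveness=1.0, small_effectiveness=10.0, dt=0.01):
    """Simulate d(big)/dt = -small_eff*small and d(small)/dt = -big_eff*big until one side is gone."""
    while big > 0 and small > 0:
        big, small = big - small_effectiveness * small * dt, small - big_effectiveness * big * dt
    return max(big, 0.0), max(small, 0.0)

# A ten-to-one numbers advantage, even against fighters ten times as effective one-on-one:
big_left, small_left = lanchester(big=10_000, small=1_000)
print(f"big colony survivors: {big_left:,.0f}, small colony survivors: {small_left:,.0f}")
```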
