Tag Archives: Biology

Men Are Animals

I spent much of the (so far) middle of my life pursuing big/deep questions. That’s what I focused on in each subject I learned, that’s what I liked most about the first topic (physics) I specialized in, and that’s what drew me to each new subject as I switched.

It was good that I was able to stop jumping to new subjects, so that I could actually accomplish things. However, while I long avoided studying biology (too much disorganized detail!), I recently found an excuse to focus there, and I’ve been enjoying my old pleasure via its deep questions. For example, I’ve long heard talk about the puzzle of why sex exists, and have heard many plausible arguments on why sex (mostly) won over asexual reproduction. But until a few days ago I hadn’t noticed a harder puzzle: why do most animals, but not most plants, have males?

Most plants are hermaphrodites; each organism has both male and female genitalia. Plants thus gain the many advantages of recombining genes from multiple parents, while also ensuring that each organism contributes fully to reproducing the species. Most animals, in contrast, reproduce via pairing one male and one female, with females investing more than males in children. In such species, males and females differ in many ways that can be understood as resulting from these differing investments in children.

Many of these differences seem to be costly for the species. For example, not only do males spend most of their resources competing with each other for access to females instead of helping with children, their competition often directly harms females and children. In fact, species where males differ more from females go extinct more quickly:

When comparing species, it emerged that those in which males were very different from females had a poorer prognosis for continued existence. The authors’ models predict a tenfold increase in extinction risk per unit time when species in which males are larger than females, with large differences in shape between the sexes, are compared with species in which the males are smaller than the females, with small differences in shape between the sexes. (more)

And yet males exist, at least in most animal species. Why? More monogamous species, like humans, where fathers invest more in kids, are less of a puzzle, but they remain a puzzle as long as males invest less. As plants show that an all-hermaphrodite equilibrium can robustly last long for complex species, there must be some big positive advantage to having males in animal, but not plant, species.

After reading a dozen or so papers, I can report: we just have no idea what that advantage is. One person suggests males are “an ‘experimental’ part of the species that allows the species to expand their ecological niche, and to have alternative configurations.” But this idea doesn’t seem to have been developed very far, and why wouldn’t this work just as well for plants?

The robust existence of animal males strongly suggests that we men have an important but-as-yet-unknown mission. We offer a gain that more than pays for our many costs, at least in most animal species. And yet our costs seem much clearer than our gains. We men might feel a bit better about our place in the world if we could better understand our positive contributions. And yet very few people study this deep question, even as vast numbers remain very engaged discussing human gender politics. That seems a shame to me.

Added 9:30p: Plants do compete for and select mates. It isn’t obvious that mobility allows far more such competition.

Added 4a: You might have seen evolutionary competition as overly destructive, but existing because more cooperation requires more coordination, which is hard. But the existence of males shows that, at least for animals, evolution saw “red in tooth and claw” competition between hermaphrodites as insufficient. So evolution created and maintains an even stronger kind of competition, between males who need to invest less in children and can thus invest even more in competition.

How Does Evolution Escape Local Maxima?

I’ve spent most of my intellectual life as a theorist, but alas it has been a while since I’ve taken the time to learn a new powerful math-based theory. But in the last few days I’ve enjoyed studying Andreas Wagner’s theories of evolutionary innovation and robustness. While Wagner has some well-publicized and reviewed books, such as Arrival of the Fittest (2014) and Robustness and Evolvability in Living Systems (2005), the best description of his key results seems to be found in a largely ignored 2011 book, The Origins of Evolutionary Innovations, which is based on many academic journal articles.

In one standard conception, evolution does hill-climbing within a network of genotypes (e.g., DNA sequence), rising according to a “fitness” value associated with the phenotype (e.g., tooth length) that results from each genotype. In this conception, a big problem is local maxima: hill-climbing stops once all the neighbors of a genotype have a lower fitness value. There isn’t a way to get to a higher peak if one first must travel through a lower valley to reach it. Maybe random noise could let the process slip through a narrow shallow valley, but what about valleys that are wide and deep? (This is a familiar problem in computer-based optimization search.)
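
To make the local-maximum problem concrete, here is a minimal sketch of my own (a standard “deceptive trap” toy, not anything from Wagner): genotypes are bit strings, neighbors differ by one bit, and a one-bit-flip hill climber almost always ends at a local peak far from the global one.

```python
# A minimal sketch of hill-climbing getting stuck (a "deceptive trap" toy of
# my own, not Wagner's model). Genotypes are bit strings, neighbors differ by
# one bit, and the global peak (all ones) sits across a wide valley, so a
# one-bit-flip hill climber almost always ends at the local peak (all zeros).
import random

N = 20

def fitness(g):
    ones = sum(g)
    return N + 1 if ones == N else N - 1 - ones   # all-ones is the global peak

def hill_climb(g):
    while True:
        neighbors = [g[:i] + [1 - g[i]] + g[i+1:] for i in range(N)]
        best = max(neighbors, key=fitness)
        if fitness(best) <= fitness(g):
            return g                               # local maximum reached
        g = best

random.seed(0)
start = [random.randint(0, 1) for _ in range(N)]
final = hill_climb(start)
print("final fitness:", fitness(final), "  global peak:", N + 1)
# Unless the start happens to be all ones, the climber ends at all zeros with
# fitness N-1, unable to cross the valley that surrounds the global peak.
```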

Wagner’s core model looks at the relation between genotypes and phenotypes for metabolism in an organism like E. coli. In this context, Wagner defines a genotype as the set of chemical reactions which the enzymes of an organism can catalyze, and he defines a phenotype as the set of carbon-source molecules from which an organism could create all the other molecules it needs, assuming that this source was its only place to get carbon (but allowing many sources of other needed molecules). Wagner defines the neighbors of a genotype as those that differ by just one reaction.

There are of course far more types of reactions between molecules than there are types of molecules. So using Wagner’s definitions, the set of genotypes is vastly larger than the set of phenotypes. Thus a great many genotypes result in exactly the same phenotype, and in fact each genotype has many neighboring genotypes with that same exact phenotype. And if we lump all the connected genotypes that have the same phenotype together into a unit (a unit Wagner calls a “genotype network”), and then look at the network of one-neighbor connections between such units, we will find that this network is highly connected.

That is, if one presumes that evolution (using a large population of variants) finds it easy to make “neutral” moves between genotypes with exactly the same phenotype, and hence the same fitness, then large networks connecting genotypes with the same phenotype imply that it only takes a few non-neutral moves between neighbors to get to most other phenotypes. There are no wide deep valleys to cross. Evolution can search large spaces of big possible changes, and doesn’t have a problem finding innovations with big differences.
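
Here is a much-simplified sketch in the spirit of Wagner’s metabolism model (my own toy with made-up “pathways”, not his actual model). A genotype is a set of reactions, a phenotype is the set of carbon sources those reactions make viable, and a random walk along neutral one-reaction changes brings far more phenotypes within one further mutation than the starting genotype’s own neighborhood does.

```python
# A much-simplified toy in the spirit of Wagner's metabolism model (my own
# sketch with made-up pathways, not his actual model). A genotype is the set
# of reactions carried, out of R possible reactions; a phenotype is the set of
# carbon sources the organism is viable on, where viability on a source
# requires all reactions of at least one of that source's pathways. Reactions
# irrelevant to the current phenotype can be toggled neutrally, and drifting
# along such neutral changes exposes many more phenotypes one further mutation
# away than the starting genotype's own neighborhood does.
import random

random.seed(1)
R, SOURCES = 40, 8
# Each source gets 2 alternative pathways of 3 reactions each (made up here).
PATHWAYS = {s: [frozenset(random.sample(range(R), 3)) for _ in range(2)]
            for s in range(SOURCES)}

def phenotype(genotype):
    return frozenset(s for s, paths in PATHWAYS.items()
                     if any(p <= genotype for p in paths))

def neighbors(genotype):
    return [genotype ^ {r} for r in range(R)]      # add or drop one reaction

def adjacent_phenotypes(g):
    return {phenotype(n) for n in neighbors(g)} - {phenotype(g)}

start = frozenset(random.sample(range(R), 20))
p0 = phenotype(start)
from_start = adjacent_phenotypes(start)

# Random walk that only accepts one-reaction changes preserving the phenotype.
g, reachable = start, set(from_start)
for _ in range(2000):
    n = random.choice(neighbors(g))
    if phenotype(n) == p0:
        g = n
        reachable |= adjacent_phenotypes(g)

print("phenotypes one mutation from the start genotype:", len(from_start))
print("phenotypes one mutation from its neutral network:", len(reachable))
```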

Wagner argues that there are also far more genotypes than phenotypes for two other cases: the evolution of DNA sequences that set the regulatory interactions among regulatory proteins, and for the sequences of ribonucleotides or amino acids that determine the structure and chemical activity of molecules.

In addition, Wagner also shows the same applies to a computer logic gate toy problem. In this problem, there are four input lines, four output lines, and sixteen binary logic gates in between. The genotype specifies the type of each gate and the set of wires connecting all these things, while the phenotype is the mapping between input and output gates. Again, there are far more genotypes than phenotypes. However, the observant reader will notice that all mappings between four inputs and four outputs can be produced using only four internal gates; sixteen gates is a factor of four more than needed. But in the case of four gates the set of genotypes is not big enough compared to the set of phenotypes to allow easy evolution. For easy innovation, sixteen gates is enough, but four gates is not.
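
A scaled-down version of this toy problem (2 inputs, 2 outputs, 3 two-input gates, with a gate set and feed-forward wiring convention chosen for convenience rather than to match Wagner’s exact setup) is small enough to enumerate exhaustively, and shows the same genotype-to-phenotype compression:

```python
# A scaled-down version of the logic-gate toy problem (2 inputs, 2 outputs,
# 3 two-input gates instead of 4/4/16), with a gate set and feed-forward
# wiring convention chosen for convenience rather than to match Wagner's
# exact setup. It is small enough to enumerate exhaustively, and shows far
# more genotypes (gate types plus wiring) than phenotypes (truth tables).
from itertools import product

GATES = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
}
N_IN, N_GATES = 2, 3          # outputs are read from the last two gates

def genotypes():
    per_gate = []
    for i in range(N_GATES):
        wires = range(N_IN + i)            # primary inputs and earlier gates
        per_gate.append(list(product(GATES, wires, wires)))
    return product(*per_gate)

def phenotype(geno):
    table = []
    for inputs in product((0, 1), repeat=N_IN):
        signals = list(inputs)
        for gate_type, a, b in geno:
            signals.append(GATES[gate_type](signals[a], signals[b]))
        table.append((signals[-2], signals[-1]))
    return tuple(table)

genos = list(genotypes())
phenos = {phenotype(g) for g in genos}
print(len(genos), "genotypes map onto only", len(phenos), "phenotypes")
```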

If we used a larger space of genotypes within which the number of logic gates could vary, and if the fitness function had a penalty for using more logical gates, then we’d have a problem. No matter where the genotype started, evolution might quickly cut the number of gates down to the minimum needed to implement its current input-output mapping, and then after that too few neutral changes would be possible to make evolution easy. The same problem seems possible in Wagner’s core model of metabolism; if the fitness function has a penalty for the number of enzymes used, evolution might throw away enzymes not needed to produce the current phenotype, after which too few neutral changes might be possible to allow easy evolution.

Wagner seems to suggest a solution: larger, more complex systems are needed for robustness to varying environments:

Based on our current knowledge, the metabolic reaction networks of E. coli and yeast comprise more than 900 chemical reactions. However in a glucose minimal environment, more than 60 percent of these reactions are silent. … Overall, in E. coli, the fraction of reactions that would not reduce bio-mass growth when eliminated exceeds 70 percent. This is … a general property of viable networks that have similar complexity. … As a metabolic generalist, the E. coli metabolic network can synthesize its biomass from more than 80 alternative carbon sources. … All these observations indicate that the large metabolic networks of free-living organisms are much more complex than necessary to sustain life in any one environment. Their complexity arises from their viability in multiple environments. A consequence is that these networks appear highly robust to reaction removal in any one environment, where every metabolic network has multiple neutral neighbors. This neutrality, however, is conditional on the environment. (pp.153-154)

I’m not sure this solves the problem, however. In the logic gate toy problem, even if phenotype fitness is given by a weighted average over environments, we’ll still have the same temptation to increase fitness by dropping gates not needed to implement the current best bit mapping. In the case of enzymes for metabolism, fitness given by a weighted average of environments may also promote an insufficient complexity of enzymes. It seems we need a model that can represent the value of holding gate or enzyme complexity in reserve against the possibility of future changes.

I worry that this more realistic model, whatever it may be, may contain a much larger set of phenotypes, so that the set of genotypes is no longer much larger, and so no longer guarantees many neutral changes to genotypes. Perhaps a “near neutrality” will apply, so that many genotype neighbors have only small fitness differences. But it may require a much more complex analysis to show that outcome; mere counting may not be enough. I still find it hard to believe that for realistic organisms, the set of possible phenotypes is much smaller than the set of genotypes. Though perhaps I could believe that many pairs of genotypes produce the same distribution over phenotypes, as environments vary.

Added 10am: Another way to say this: somehow the parameter that sets how much complexity to keep around has to change a lot slower than do most other parameters encoded in the genome. In this way it could notice the long term evolvability benefits of complexity.

More Than Death, Fear Decay

Most known “systems” decay, rot, age, and die. We usually focus on the death part, but the more fundamental problem is decay (a.k.a. rotting, aging). Death is almost inevitable, as immortality is extremely difficult to achieve. Systems that don’t decay can still die; we sometimes see systems where the chance of death stays constant over time. But for most complex systems, the chance of death rises with time, due to decay.
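
To see the distinction in numbers, here is a minimal sketch (with illustrative parameters of my own choosing): a system that can die but does not decay has a constant per-period chance of death, so its survival curve falls exponentially, while a decaying system has a chance of death that rises with age.

```python
# A minimal numerical sketch of the distinction above (parameters are
# arbitrary illustrations): a constant chance of death per period gives an
# exponentially falling survival curve, while a decaying system has a chance
# of death that rises with age (here a Gompertz-style hazard).
import math

def survival_constant_hazard(t, h=0.02):
    return math.exp(-h * t)                    # death chance never rises

def survival_rising_hazard(t, a=0.002, b=0.08):
    # hazard a*exp(b*t) grows with age; survival = exp(-(a/b)(e^{bt} - 1))
    return math.exp(-(a / b) * (math.exp(b * t) - 1))

for t in (10, 40, 70, 100):
    print(t, round(survival_constant_hazard(t), 3),
          round(survival_rising_hazard(t), 3))
```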

Many simple physical systems, like chairs, decay because the materials of their parts decay. Such systems can often be rejuvenated by replacing those materials. More generally, simple modular systems can be rejuvenated by replacing the modular parts that decay. For example, it is possible to spend enough to maintain most cars and buildings indefinitely in a nearly original condition, though we rarely see this as worth the bother.

Complex adaptive systems (CAS), such as firms, have many parts in complex relations, relations that change in an attempt to adapt to changing conditions. When a CAS changes its design and structure to adapt, however, this rarely results in modular sub-designs that can be swapped out. Alas, the designs of most known CAS decay as they adapt. In biological organisms this is called “aging”, in software it is called “rot”, and in product design this is called the “innovator’s dilemma”. Human brains change from having “fluid” to “crystallized” intelligence, and machine learning systems trained in one domain usually find it harder to learn quite different domains. We also see aging in production plans, firms, empires, and legal systems. I don’t know of data on whether things like cities, nations, professions, disciplines, languages, sports, or art genres age. But it isn’t obvious that they don’t also decay.

It is not just that it is easier to create and train new CAS, relative to rejuvenating old ones. It seems more that we just don’t know how to prevent rot at any remotely reasonable cost. In software, designers often try to “refactor” their systems to slow the process of aging. And sometimes such designers report that they’ve completely halted aging. But these exceptions are mostly in systems that are small and simple, with stable environments, or with crazy amounts of redesign effort.

However, I think we can see at least one clear exception to this pattern of rotting CAS: some generalist species. If the continually changing environment of Earth caused all species to age at similar rates, then over the history of life on Earth we would see a consistent trend toward a weaker ability of life to adapt to changing conditions. Eventually life would lose its ability to sufficiently adapt, and life would die out. If some kinds of life could survive in a few very slowly changing garden environments, then eventually all life would descend from the stable species that waited unchanging in those few gardens. The longer it had been since a species had descended from a stable garden species, the faster that species would die out.

But that isn’t what we see. Instead, while species that specialize to particular environments do seem to go extinct more easily, generalist species seem to maintain their ability to adapt across eons, even after making a great many adaptations. Somehow, the designs of generalist species do not seem to rot, even though typical organisms within that species do rot. How do they do that?

It is possible that biological evolution has discovered some powerful design principles of which we humans are still ignorant. If so, then eventually we may learn how to cheaply make CAS that don’t rot. But in this case, why doesn’t evolution use those anti-rot design principles to create individual organisms that don’t decay or age? Evolution seems to judge it much more cost effective to make individual organisms that rot. A more likely hypothesis is that there is no cheap way to prevent rot; evolution has just continually paid a large cost to prevent rot. Perhaps early on, some species didn’t pay this cost, and won for a while. But eventually they died from rot, leaving only non-rotting species to inherit the Earth. It seems there must be some level in a system that doesn’t rot, if it is to last over the eons, and selection has ensured that the life we now see has such a level.

If valid, this perspective suggests a few implications for the future of life and civilization. First, we should seriously worry about which aspects of our modern civilization system are rotting. Human culture has lasted a million years, but many parts of our modern world are far younger. If the first, easiest version of a system that we can find to do something is typically a rotting system, and if it takes a lot more work to find a non-rotting version, should we presume that most of the new systems we have are rotting versions? Farming-era empires consistently rotted; how sure can we be that our world-wide industry-era empire isn’t similarly rotting today? We may be accumulating a technical debt that will be expensive to repay. Law and regulation seem to be rotting; should we try to induce a big refactoring there? Should we try to create and preserve contrarian subcultures or systems that are less likely to crash along with the dominant culture and system?

Second, we should realize that it may be harder than we thought to switch to a non-biological future. We humans are now quite tied to the biosphere, and would quickly die if biology were to die. But we have been slowly building systems that are less closely tied to biology. We have been digging up materials in mines, collecting energy directly from atoms and the Sun, and making things in factories. And we’ve started to imagine a future where the software in our brains is copied into factory-made hardware, i.e., ems, joined there by artificial software. At which point our descendants might no longer depend on biological systems. But replacing biological systems with our typically rotting artificial systems may end badly. And making artificial systems that don’t rot may be a lot more expensive and time-consuming than we’ve anticipated.

Some imagine that we will soon discover a simple powerful general learning algorithm, which will enable us to make a superintelligence, a super-smart hyper-consistent eternal mind with no internal conflicts and arbitrary abilities to indefinitely improve itself, make commitments, and preserve its values. This mind would then rule the universe forevermore, at least until it met its alien equivalent. I expect that these visions have not sufficiently considered system rot, among other issues.

In my first book I guessed that during the age of em, individual ems would become fragile over time, and after a few subjective centuries they’d need to be replaced by copies of fresh scans of young humans. I also guessed that eventually it would become possible to substantially redesign brains, and that the arrival of this ability might herald the start of the next age after the age of em. If this requires figuring out how to make non-rotting versions of these new systems, the age of em might last even longer than one would otherwise guess.

Brains Simpler Than Brain Cells?

Consider two possible routes to generating human level artificial intelligence (AI): brain emulation (ems) versus ordinary AI (wherein I lump together all the other usual approaches to making smart code). Both approaches require that we understand something well enough to create a functional replacement for it. Ordinary AI requires this for entire brains, while ems require this only for brain cells.

That is, to make ordinary AI we need to find algorithms that can substitute for most everything useful that a human brain does. But to make brain emulations, we need only find models that can substitute for what brain cells do for brains: take input signals, change internal states, and then send output signals. (Such brain cell models need not model most of the vast complexity of cells, complexity that lets cells reproduce, defend against predators, etc.)

To make an em, we will also require brain scans at a sufficient spatial and chemical resolution, and enough cheap fast parallel computers. But the difficulty of achieving these other requirements scales with the difficulty of modeling brain cells. The simpler brain cells are, the less detail we’ll need to scan, and the smaller computers we’ll need to emulate them. So the relative difficulty of ems vs ordinary AI mainly comes down to the relative model complexity of brain cells versus brains.
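
A quick back-of-envelope sketch of that scaling (every number below is an illustrative assumption of mine, not an estimate from this post): the memory and processing an em needs grow in direct proportion to how many bits and operations each cell model requires.

```python
# A back-of-envelope sketch of that scaling. Every number here is an
# illustrative assumption, not an estimate from this post: the point is only
# that em hardware needs grow in direct proportion to the bits and operations
# each brain-cell model requires.
NEURONS = 8.6e10     # assumed cell count to be emulated

def em_requirements(bits_per_cell, ops_per_cell_per_sec):
    memory_bytes = NEURONS * bits_per_cell / 8
    ops_per_sec = NEURONS * ops_per_cell_per_sec
    return memory_bytes, ops_per_sec

for bits, ops in [(1e3, 1e3),    # if cells can be modeled very simply
                  (1e6, 1e6)]:   # if far more per-cell detail is needed
    mem, rate = em_requirements(bits, ops)
    print(f"{bits:.0e} bits/cell, {ops:.0e} ops/cell/s "
          f"-> {mem:.1e} bytes, {rate:.1e} ops/s")
```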

Today we are seeing a burst of excitement about rapid progress in ordinary AI. While we’ve seen such bursts every decade or two for a long time, many people say “this time is different.” Just as they’ve done before; for a long time the median published forecast has said human level AI will appear in thirty years, and the median AI researcher surveyed has said forty years. (Even though such people estimate 5-10x slower progress in their subfield in the past twenty years.)

In contrast, we see far less excitement now about rapid progress in brain cell modeling. Few neuroscientists publicly estimate brain emulations soon, and no one has even bothered to survey them. Many take these different levels of hype and excitement as showing that in fact brains are simpler than brain cells – we will more quickly find models and algorithms that substitute for brains than we will those that can substitute for brain cells.

Now while it just isn’t possible for brains to be simpler than brain cells, it is possible for our best models that substitute for brains to be simpler than our best models that substitute for brain cells. This requires only that brains be far more complex than our best models that substitute for them, and that our best models that substitute for brain cells are not far less complex than such cells. That is, humans will soon discover a solution to the basic problem of how to construct a human-level intelligence that is far simpler than the solution evolution found, but evolution’s solution is strongly tied to its choice of very complex brain cells, cells whose complexity cannot be substantially reduced via clever modeling. While evolution searched hard for simpler cheaper variations on the first design it found that could do the job, all of its attempts to simplify brains and brain cells destroyed the overall intelligence that it sought to maintain.

So maybe what the median AI researcher and his or her fans have in mind is that the intelligence of the human brain is essentially simple, while brain cells are essentially complex. This essential simplicity of intelligence view is what I’ve attributed to my ex-co-blogger Eliezer Yudkowsky in our foom debates. And it seems consistent with a view common among fast AI fans that once AI displaces humans, AIs would drop most of the distinctive features of human minds and behavior, such as language, laughter, love, art, etc., and also most features of human societies, such as families, friendship, teams, law, markets, firms, nations, conversation, etc. Such people tend to see such human things as useless wastes.

In contrast, I see the term “intelligence” as mostly used to mean “mental betterness.” And I don’t see a good reason to think that intelligence is intrinsically much simpler than betterness. Human brains sure look complex, and even if big chunks of them by volume may be modeled simply, the other chunks can contain vast complexity. Humans really do a very wide range of tasks, and successful artificial systems have only done a small range of those tasks. So even if each task can be done by a relatively simple system, it may take a complex system to do them all. And most of the distinctive features of human minds and societies seem to me functional – something like them seems useful in most large advanced societies.

In contrast, for the parts of the brain that we’ve been able to emulate, such as parts that process the first inputs of sight and sound, what brain cells there do for the brain really does seem pretty simple. And in most brain organs what most cells do for the body is pretty simple. So the chances look pretty good that what most brain cells do for the brain is pretty simple.

So my bet is that brain cells can be modeled more simply than can entire brains. But some seem to disagree.

Tyler Says Never Ems

There are smart intellectuals out there who think economics is all hogwash, and who resent economists continuing on while their concerns have not been adequately addressed. Similarly, people in philosophy of religion and philosophy of mind resent cosmologists and brain scientists continuing on as if one could just model cosmology without a god, or reduce the mind to physical interactions of brain cells. But in my mind such debates have become so stuck that there is little point in waiting until they are resolved; some of us should just get on with assuming particular positions, especially positions that seem so very reasonable, even obvious, and seeing where they lead.

Similarly, I have heard people debate the feasibility of ems for many decades, and such debates have similarly become stuck, making little progress. Instead of getting mired in that debate, I thought it better to explore the consequences of what seems to me the very reasonable position that ems will eventually be possible. Alas, that mud pit has strong suction. For example, Tyler Cowen:

Do I think Robin Hanson’s “Age of Em” actually will happen? … my answer is…no! .. Don’t get me wrong, I still think it is a stimulating and wonderful book.  And if you don’t believe me, here is The Wall Street Journal:

Mr. Hanson’s book is comprehensive and not put-downable.

But it is best not read as a predictive text, much as Robin might disagree with that assessment.  Why not?  I have three main reasons, all of which are a sort of punting, nonetheless on topics outside one’s areas of expertise deference is very often the correct response.  Here goes:

1. I know a few people who have expertise in neuroscience, and they have never mentioned to me that things might turn out this way (brain scans uploaded into computers to create actual beings and furthermore as the dominant form of civilization).  Maybe they’re just holding back, but I don’t think so.  The neuroscience profession as a whole seems to be unconvinced and for the most part not even pondering this scenario. ..

3. Robin seems to think the age of Em could come about reasonably soon. …  Yet I don’t see any sign of such a radical transformation in market prices. .. There are for instance a variety of 100-year bonds, but Em scenarios do not seem to be a factor in their pricing.

But the author of that Wall Street Journal review, Daniel J. Levitin, is a neuroscientist! You’d think that if his colleagues thought the very idea of ems iffy, he might have mentioned caveats in his review. But no, he worries only about timing:

The only weak point I find in the argument is that it seems to me that if we were as close to emulating human brains as we would need to be for Mr. Hanson’s predictions to come true, you’d think that by now we’d already have emulated ant brains, or Venus fly traps or even tree bark.

Because readers kept asking, in the book I give a concrete estimate of “within roughly a century or so.” But the book really doesn’t depend much on that estimate. What it mainly depends on is ems initiating the next huge disruption on the scale of the farming or industrial revolutions. Also, if the future is important enough to have a hundred books exploring scenarios, it can be worth having books on scenarios with only a 1% chance of happening, and taking those books seriously as real possibilities.

Tyler has spent too much time around media pundits if he thinks he should be hearing a buzz about anything big that might happen in the next few centuries! Should he have expected to hear about cell phones in 1960, or smart phones in 1980, from a typical phone expert then, even without asking directly about such things? Both of these were reasonably foreseen many decades in advance, yet several decades before they took off you’d have found it hard to see signs of them in casual conversations with phone experts, or in phone firm stock prices. (Betting markets directly on these topics would have seen them. Alas we still don’t have such things.)

I’m happy to accept neuroscientist expertise, but mainly on how hard it is to scan brain cells and model them on computers. This isn’t going to come up in casual conversation, but if asked neuroscientists will pretty much all agree that it should eventually be possible to create computer models of brain cells that capture their key signal processing behavior, i.e., the part that matters for signals received by the rest of the body. They will say it is a matter of when, not if. (Remember, we’ve already done this for the key signal processing behaviors of eyes and ears.)

Many neuroscientists won’t be familiar with computer modeling of brain cell activity, so they won’t have much of an idea of how much computing power is needed. But for those familiar with computer modeling, the key question is: once we understand brain cells well, what are plausible ranges for 1) the number of bits required to store the current state of each inactive brain cell, and 2) how many computer processing steps (or gate operations) per second are needed to mimic an active cell’s signal processing.

Once you have those numbers, you’ll need to talk to people familiar with computing cost projections to translate these computing requirements into dates when they can be met cheaply. And then you’d need to talk to economists (like me) to understand how that might influence the economy. You shouldn’t remotely expect typical neuroscientists to have good estimates there. And finally, you’ll have to talk to people who think about other potential big future disruptions to see how plausible it is that ems will be the first big upcoming disruption on the scale of the farming or industrial revolutions.
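
Here is a sketch of that estimation chain, with every number below an illustrative assumption rather than a claim: assumed per-cell requirements scale up to a whole-brain total, and an assumed rate of decline in hardware prices then turns that total into a rough affordability date.

```python
# A sketch of the estimation chain described above. Every number below is an
# illustrative assumption, not a claim: assumed per-cell requirements scale
# up to a whole-brain total, and an assumed decline rate in hardware prices
# turns that total into a rough date at which one real-time em is affordable.
import math

NEURONS = 8.6e10                 # assumed cell count
OPS_PER_CELL_PER_SEC = 1e4       # assumed, from a hypothetical cell model
BRAIN_OPS_PER_SEC = NEURONS * OPS_PER_CELL_PER_SEC

PRICE_PER_OPS_PER_SEC = 1e-9     # assumed dollars per (op/sec) of hardware today
ANNUAL_PRICE_DECLINE = 0.30      # assumed fractional price fall per year
BUDGET = 1e5                     # assumed dollars of hardware per em

cost_today = BRAIN_OPS_PER_SEC * PRICE_PER_OPS_PER_SEC
years = max(0.0, math.log(cost_today / BUDGET) / -math.log(1 - ANNUAL_PRICE_DECLINE))
print(f"hardware for one em today: ${cost_today:.1e}; "
      f"affordable in roughly {years:.0f} years under these assumptions")
```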

Reply to Jones on Ems

In response to Richard Jones’ book review, I said:

So according to Jones, we can’t trust anthropologists to describe foragers they’ve met, we can’t trust economics when tech changes society, and familiar design principles fail for understanding brains and tiny chemical systems. Apparently only his field, physics, can be trusted well outside current experience. In reply, I say I’d rather rely on experts in each field, relative to his generic skepticism. Brain scientists see familiar design principles as applying to brains, even when designed by evolution, economists see economics as applying to past and distant societies with different tech, and anthropologists think they can understand cultures they visit.

Jones complained on twitter that I “prefer to argue from authority rather than engage with their substance.” I replied “There can’t be much specific response to generic skepticism,” to which he replied, “Well, there’s more than 4000 words of quite technical argument on the mind uploading question in the post I reference.” He’s right that he wrote 4400 words. But let me explain why I see them more as generic skepticism than technical argument.

For context, note that there are whole fields of biological engineering, wherein standard engineering principles are used to understand the engineering of biological systems. These include the design of many specific systems within organisms, such as lungs, blood, muscles, bone, and skin, and also specific subsystems within cells, and also standard behaviors, such as gait rhythms and foraging patterns. Standard design principles are also used to understand why cells are split into different modules that perform distinct functions, instead of having each cell try to contribute to all functions, and why only a few degrees of freedom for each cell matters for that cell’s contribution to its system. Such design principles can also be used to understand why systems are abstract, in the sense of having only one main type of muscle, for creating forces used for many purposes, one main type of blood system, to move most everything around, and only one main fast signal system, for sending signals of many types.

Our models of the function of many key organs have in fact often enabled us to create functional replacements for them. In addition, we already have good models of, and successful physical emulations of, key parts of the brain’s input and output, such as input from eyes and ears, and output to arms and legs.

Okay, now here are Jones’ key words:

This separation between the physical and the digital in an integrated circuit isn’t an accident or something pre-ordained – it happens because we’ve designed it to be that way. For those of us who don’t accept the idea of intelligent design in biology, that’s not true for brains. There is no clean “digital abstraction layer” in a brain – why should there be, unless someone designed it that way?

But evolution does design, and its designs do respect standard design principles. Evolution has gained by using both abstraction and modularity. Organs exist. Humans may be better in some ways than evolution at searching large design spaces, but biology definitely designs.

In a brain, for example, the digital is continually remodelling the physical – we see changes in connectivity and changes in synaptic strength as a consequence of the information being processed, changes, that as we see, are the manifestation of substantial physical changes, at the molecular level, in the neurons and synapses.

We have programmable logic devices, such as FPGAs, which can do exactly this.

Underlying all these phenomena are processes of macromolecular shape change in response to a changing local environment. .. This emphasizes that the fundamental unit of biological information processing is not the neuron or the synapse, it’s the molecule.

But you could make that same sort of argument about all organs, such as bones, muscles, lungs, blood, etc., and say we also can’t understand or emulate them without measuring and modeling them in molecular detail. Similarly for the brain input/output systems that we have already emulated.

Determining the location and connectivity of individual neurons .. is necessary, but far from sufficient condition for specifying the informational state of the brain. .. The molecular basis of biological computation means that it isn’t deterministic, it’s stochastic, it’s random.

Randomness is quite easy to emulate, and most who see ems as possible expect to need brain scans with substantial chemical, in addition to spatial, resolution.

And that’s it, that is Jones’ “technical” critique. Since biological systems are made by evolution, human design principles don’t apply, and since they are made of molecules, one can’t emulate them without measuring and modeling at the molecular level. Never mind that we have actually seen design principles apply, and emulated while ignoring molecules. That’s what I call “generic skepticism”.

In contrast, I say brains are signal processing systems, and applying standard design principles to such systems tells us:

To manage its intended input-output relation, a signal processor simply must be designed to minimize the coupling between its designed input, output, and internal channels, and all of its other “extra” physical degrees of freedom. ..  To emulate a biological signal processor, one need only identify its key internal signal dimensions and their internal mappings – how input signals are mapped to output signals for each part of the system. These key dimensions are typically a tiny fraction of its physical degrees of freedom. Reproducing such dimensions and mappings with sufficient accuracy will reproduce the function of the system. This is proven daily by the 200,000 people with artificial ears, and will be proven soon when artificial eyes are fielded.
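
As a small illustration of that last point (my own toy example, not anything from the book or from Jones): a system can have many internal physical degrees of freedom while its input-output behavior lives on far fewer dimensions, and reproducing just those dimensions reproduces its function.

```python
# A toy example of my own (not from the book or from Jones): a "physical"
# system with 200 internal variables whose input-output behavior is exactly
# captured by a one-variable emulation, because only one combination of the
# internal variables couples the input to the output.
import random

random.seed(0)
N, STEPS, DT, LAM = 200, 300, 0.01, 1.0
w_in = [random.gauss(0, 1) for _ in range(N)]    # input coupling per variable
w_out = [random.gauss(0, 1) for _ in range(N)]   # readout weight per variable
u = [random.gauss(0, 1) for _ in range(STEPS)]   # input signal

# Full "physical" system: 200 leaky internal variables plus a linear readout.
x, y_full = [0.0] * N, []
for t in range(STEPS):
    x = [(1 - DT * LAM) * xi + DT * wi * u[t] for xi, wi in zip(x, w_in)]
    y_full.append(sum(wo * xi for wo, xi in zip(w_out, x)))

# Emulation: a single state whose gain is the one coupling that matters.
gain = sum(wo * wi for wo, wi in zip(w_out, w_in))
z, y_em = 0.0, []
for t in range(STEPS):
    z = (1 - DT * LAM) * z + DT * gain * u[t]
    y_em.append(z)

print("max output mismatch:", max(abs(a - b) for a, b in zip(y_full, y_em)))
```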

Monster Pumps

Yesterday’s Science has a long paper on an exciting new scaling law. For a century we’ve known that larger organisms have lower metabolisms, and thus lower growth rates. Metabolism goes as size to the power of 3/4 over at least twenty orders of magnitude:

[Figure: metabolic rate versus body mass, scaling as the 3/4 power over roughly twenty orders of magnitude]

So our largest organisms have a per-mass metabolism one hundred thousand times lower than our smallest organisms.

The new finding is that local metabolism also goes as local biomass density to the power of roughly 3/4, over at least three orders of magnitude. This implies that life in dense areas like jungles is just slower and lazier on average than is life in sparse areas like deserts. And this implies that the ratio of predator to prey biomass is smaller in jungles compared to deserts.

When I researched how to cool large em cities I found that our best cooling techs scale quite nicely, and so very big cities need only pay a small premium for cooling compared to small cities. However, I’d been puzzled about why biological organisms seem to pay much higher premiums to be large. This new paper inspired me to dig into the issue.

What I found is that human engineers have figured ways to scale large fluid distribution systems that biology has just never figured out. For example, the hearts that pump blood through animals are periodic pumps, and such pumps have the problem that the pulses they send through the blood stream can reflect back from joints where blood vessels split into smaller vessels. There are ways to design joints to eliminate this, but those solutions create a total volume of blood vessels that doesn’t scale well. Another problem is that blood vessels taking blood to and from the heart are often near enough to each other to leak heat, which can also create a bad scaling problem.

The net result is that big organisms on Earth are just noticeably sluggish compared to small ones. But big organisms don’t have to be sluggish; that is just an accident of the engineering failures of Earth biology. If there is a planet out there where biology has figured out how to efficiently scale its blood vessels, such as by using continuous pumps, the organisms on that planet will have fewer barriers to growing large and active. Efficiently designed large animals on Earth could easily have metabolisms that are thousands of times faster than in existing animals. So, if you don’t already have enough reasons to be scared of alien monsters, consider that they might have far faster metabolisms, and also be very large.

This seems yet another reason to think that biology will soon be over. Human culture is inventing so many powerful advances that biology never found, innovations that are far easier to integrate into the human economy than into biological designs. Descendants that integrate well into the human economy will just outcompete biology.

I also spent a little time thinking about how one might explain the dependence of metabolism on biomass density. I found I could explain it by assuming that the more biomass there is in some area, the less energy each unit of biomass gets from the sun. Specifically, I assume that the energy collected from the sun by the biomass in some area has a power law dependence on the biomass in that area. If biomass were very efficiently arranged into thin solar collectors then that power would be one. But since we expect some biomass to block the view of other biomass, a problem that gets worse with more biomass, the power is plausibly less than one. Let’s call this power a; it relates biomass density B to energy collected per area E, as in E = c·B^a.

There are two plausible scenarios for converting energy into new biomass. When the main resource needed to make new biomass via metabolism is just energy to create molecules that embody more energy in their arrangement, then M = c·B^(a−1), where M is the rate of production of new biomass relative to old biomass. When new biomass doesn’t need much energy, but it does need thermodynamically reversible machinery to rearrange molecules, then M = c·B^((a−1)/2). These two scenarios reproduce the observed 3/4 power scaling law when a = 3/4 and 1/2 respectively. When making new biomass requires both simple energy and reversible machinery, the required power a is somewhere between 1/2 and 3/4.
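
Spelling out that arithmetic (reading the observed 3/4 law as applying to per-area metabolism, so per-mass metabolism goes as B^(−1/4), which is the reading that makes the stated exponents work out):

```latex
% Reading the observed law as per-area metabolism \propto B^{3/4},
% i.e. per-mass metabolism \propto B^{-1/4}:
\[
  E = c\,B^{a}
  \quad\Longrightarrow\quad
  \frac{E}{B} \propto B^{a-1}.
\]
\[
  \text{energy-limited: } M = c\,B^{a-1} \propto B^{-1/4}
  \;\Longrightarrow\; a = \tfrac{3}{4};
  \qquad
  \text{machinery-limited: } M = c\,B^{(a-1)/2} \propto B^{-1/4}
  \;\Longrightarrow\; a = \tfrac{1}{2}.
\]
```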

Added 14Sep: On reflection and further study, it seems that biologists just do not have a good theory for the observed 3/4 power. In addition, the power deviates substantially from 3/4 within smaller datasets.

More Whales Please

I was struck by this quote in the paper cited in my last post:

The biosphere considered as a whole has managed to expand the amount of solar energy captured for metabolism to around 5%, limited by the nonuniform presence of key nutrients across the Earth’s surface — primarily fresh water, phosphorus, and nitrogen. Life on Earth is not free-energy-limited because, up until recently, it has not had the intelligence and mega-engineering to distribute Earth’s resources to all of the places solar energy happens to fall, and so it is, in most places, nutrient-limited. (more)

That reminded me of reading earlier this year about how whale poop was once a great nutrient distributor:

A couple of centuries ago, the southern seas were packed with baleen whales. Blue whales, the biggest creatures on Earth, were a hundred times more plentiful than they are today. Biologists couldn’t understand how whales could feed themselves in such an iron-poor environment. And now we may have an answer: Whales are extraordinary recyclers. What whales consume (which is a lot), they give back. (more)

It seems we should save (and expand) the whales because of their huge positive externality on other fish. If humans manage to increase the fraction of solar energy used by life on Earth, it will be primarily because of trade and transport. Transport gives us the ability to move lots of nutrients, and trade gives us the incentives to move them.

Irreducible Detail

Our best theories vary in generality. Some theories are very general, but most are more context specific. Putting all of our best theories together usually doesn’t let us make exact predictions on most variables of interest. We often express this fact formally in our models via “noise,” which represents other factors that we can’t yet predict.

For each of our theories there was a point in time when we didn’t have it yet. Thus we expect to continue to learn more theories, which will let us make more precise predictions. And so it might seem like we can’t constrain our eventual power of prediction; maybe we will have powerful enough theories to predict everything exactly.

But that doesn’t seem right either. Our best theories in many areas tell us about fundamental limits on our prediction abilities, and thus limits on how powerful future simple general theories could be. For example:

  • Thermodynamics – We can predict some gross features of future physical states, but the entropy of a system sets a very high (negentropy) cost to learn precise info about the state of that system. If thermodynamics is right, there will never be a general theory to let one predict future states more cheaply than this.
  • Finance – Finance theory has identified many relevant parameters to predict the overall distribution of future asset returns. However, finance theory strongly suggests that it is usually very hard to predict details of the specific future returns of specific assets. The ability to do so would be worth such a huge amount that there just can’t be many who often have such an ability. The cost to gain such an ability must usually be more than the gains from trading it.
  • Cryptography – A well devised code looks random to an untrained eye. As there are a great many possible codes, and a great many ways to find weaknesses in them, it doesn’t seem like there could be any general way to break all codes. Instead code breaking is a matter of knowing lots of specific things about codes and ways they might be broken. People use codes when they expect the cost of breaking them to be prohibitive, and such expectations are usually right.
  • Innovation – Economic theory can predict many features of economies, and of how economies change and grow. And innovation contributes greatly to growth. But economists also strongly expect that the details of particular future innovations cannot be predicted except at a prohibitive cost. Since knowing of innovations ahead of time can often be used for great private profit, and would speed up the introduction of those innovations, it seems that no cheap-to-apply simple general theories can exist which predict the details of most innovations well ahead of time.
  • Ecosystems – We understand some ways in which parameters of ecosystems correlate with their environments. Most of these make sense in terms of general theories of natural selection and genetics. However, most ecologists strongly suspect that the vast majority of the details of particular ecosystems and the species that inhabit them are not easily predictable by simple general theories. Evolution says that many details will be well matched to other details, but to predict them you must know much about the other details to which they match.

In thermodynamics, finance, cryptography, innovations, and ecosystems, we have learned that while there are many useful generalities, the universe is also chock full of important irreducible incompressible detail. As this is true at many levels of abstraction, I would add this entry to the above list:

  • Intelligence – General theories tell us what intelligence means, and how it can generalize across tasks and contexts. But most everything we’ve learned about intelligence suggests that the key to smarts is having many not-fully-general tools. Human brains are smart mainly by containing many powerful not-fully-general modules, and using many modules to do each task. These modules would not work well in all possible universes, but they often do in ours. Ordinary software also gets smart by containing many powerful modules. While the architecture that organizes those modules can make some difference, that difference is mostly small compared to having more and better modules. In a world of competing software firms, most ways to improve modules or find new ones cost more than the profits they’d induce.

If most value in intelligence comes from the accumulation of many expensive parts, there may well be no powerful general theories to be discovered to revolutionize future AI, and give an overwhelming advantage to the first project to discover them. Which is the main reason that I’m skeptical about AI foom, the scenario where an initially small project quickly grows to take over the world.

Added 7p: Peter McCluskey has thoughtful commentary here.

Does complexity bias biotechnology towards doing damage?

A few months ago I attended the Singularity Summit in Australia. One of the presenters was Randal Koene (videos here), who spoke about technological progress towards whole brain emulation, and some of the impacts this advance would have.

Many enthusiasts – including Robin Hanson on this blog – hope to use mind uploading to extend their own lives. Mind uploading is an alternative to more standard ‘biological’ methods for preventing ageing proposed by others such as Aubrey de Grey of the Methuselah Foundation. Randal believes that proponents of using medicine to extend lives underestimate the difficulty of what they are attempting to do. The reason is that evolution has led to a large number of complex and interconnected molecular pathways which cause our bodies to age and decay. Stopping one pathway won’t extend your life by much, because another will simply cause your death soon after. Controlling contagious diseases extended our lives, but not for very long, because we ran up against cancer and heart disease. Unless some ‘master ageing switch’ turns up, suspending ageing will require discovering, unpacking and intervening in dozens of things that the body does. Throwing out the body, and taking the brain onto a computer, though extremely difficult, might still be the easier option.

This got me thinking about whether biotechnology can be expected to help or hurt us overall. My impression is that the practical impact of biotechnology on our lives has been much less than most enthusiasts expected. I was drawn into a genetics major at university out of enthusiasm for ideas like ‘golden rice’ and ‘designer babies’, but progress towards actually implementing these technologies is remarkably slow. Pulling apart the many kludges evolution has thrown into existing organisms is difficult. Manipulating them to reliably get the change you want, without screwing up something else you need, even more so.

Unfortunately, while making organisms work better is enormously challenging, damaging them is pretty easy. For a human to work, a lot needs to go right. For a human to fail, not much needs to go wrong. As a rule, fiddling with a complex system is a lot more likely to ruin it than improve it. As a result, a simple organism like the influenza virus can totally screw us up, even though killing its host offers it no particular evolutionary advantage:

Few pathogens known to man are as dangerous as the H5N1 avian influenza virus. Of the 600 reported cases of people infected, almost 60 per cent have died. The virus is considered so dangerous in the UK and Canada that research can only be performed in the highest biosafety level laboratory, a so-called BSL-4 lab. If the virus were to become readily transmissible from one person to another (it is readily transmissible between birds but not humans) it could cause a catastrophic global pandemic that would substantially reduce the world’s population.

The 1918 Spanish flu pandemic was caused by a virus that killed less than 2 per cent of its victims, yet went on to kill 50m worldwide. A highly pathogenic H5N1 virus that was as easily transmitted between humans could kill hundreds of millions more.
