Tag Archives: Biology

Theories Of Unnatural Selection

In my career I’ve worked in an unusually large number of academic disciplines: physics, computer science, social science, psychology, engineering, and philosophy. But on a map of academic disciplines, where fields that cite each other often are put closer together, all my fields are clumped together on one side. The fields furthest away from my clump, on the opposite side, are biology, biochemistry, and medicine.

It seems to me that my fields tend to emphasize relatively general theory and abstraction, while the opposite fields tend to have far fewer useful abstractions, and instead have a lot more detail to master. People tend to get sorted into fields based in part on their ability and taste for abstractions, and the people I’ve met who do biochemistry and medicine tend to have amazing abilities to recall relevant details, but they also tend to be pretty bad at abstractions. For example, they often struggle with simple cost-benefit analysis and statistical inference.

All of which is to say that biologists tend to be bad at abstraction. This tends to make them bad at thinking about the long-term future, where abstraction is crucial. For example, I recently reviewed The Zoologist’s Guide to the Galaxy, wherein a zoologist says that aliens we meet would be much like us, even though they’d be many millions of years more advanced than us, apparently assuming that our descendants will not noticeably change in the next few million years.

And in a new book The Next 500 Years, a geneticist recommends that we take the next few centuries to genetically engineer humans to live on other planets, apparently unaware that our descendants will most likely be artificial (like ems), who won’t need planets in particular except as a source of raw materials. These two books have been reviewed in prestigious venues, by prestigious biology reviewers who don’t mention these to-me obvious criticisms, suggesting that our biological elites are all pretty bad at abstraction.

This is a problem because it seems to me we need biologists good at abstraction to help us think about the future. Let me explain.

Computers will be a big deal in the future, even more so than today. Computers will be embedded in and control most all of our systems. So to think well about the future, we need to think well about very large and advanced computer systems. And since computers allow our many systems to be better integrated, overall all our systems will be larger, more complex, more connected, and more smartly controlled. So to think about the future we need to think well about very large, smart, and complex integrated systems.

Economics will also remain very important in the future. These many systems will be mostly designed, built, and maintained by for-profit firms who sell access to them. These firms will compete to attract customers, investors, workers, managers, suppliers, and complementary products. They will also be taxed and regulated by complex governments. And the future economy will be much larger, making room for more and larger such firms, managing those larger more complex products. So to think well about the future we need to think well about a much larger more complex world of taxed and regulated firms competing to make and sell stuff.

We today have a huge legacy inheritance of designs and systems embedded in biology, systems that perform many essential functions, including supporting our bodies and minds. In the coming centuries, we will either transfer our minds to other more artificial substrates, or replace them entirely with new designs. At which point they won’t need biological bodies; artificial bodies will do fine. We will then either find ways to extract key biological machines and processes from existing biological systems, to use them flexibly as component processes where we wish, or we will replace those machines and processes with flexible artificial versions.

At that point, natural selection of the sort the Earth has seen for the last few billion years will basically come to an end. The universe that we reach by then will still be filled with a vast diversity of active and calculating objects competing to survive. But these objects will be designed not by inherited randomly mutating DNA, and will not be self-sufficient in terms of manufacturing and energy acquisition. They will instead be highly cooperative and interdependent objects, made by competing firms who draw design elements from a wide range of sources, most of them compensated for their contributions.

But even though biology as we know it will then be over, biological theory, properly generalized, should remain quite relevant. Because there will still be vast and rapid competition and selection, and so we will still need ways to think about how that will play out. Thus we need theorists to draw from our best understandings of systems, computers, economics, and biology, to create better ways to think about how all this combines to create a brave new world of unnatural selection.

And while I’ve seen at least glimmerings of such advances from people who think about computers, and from people who think about economics, I’ve yet to see much of anything from people who think about biology. So that seems to me our biggest missing hole here. And thus my plea in this post: please biological theorists, help us think about this. And please people who are thinking about which kind of theory to study, consider learning some biology theory, to help us fill this gap.


Managed Competition or Competing Managers?

Competition and cooperation [as] opposites, with vice on one side and virtue on the other … is a false dichotomy … The market-based competition envisioned in economics is disciplined by rules and reputations. … Just as competition is not a shorthand for “anything goes,” the quick and thoughtless inference that cooperation is necessarily virtuous is often unjustified. In many cases, cooperation is a tool for an in-group to take advantage of those outside the group. …

Competition refers to a situation in which people or organizations (such as firms) apply their efforts and talents toward a certain goal, and they receive results based substantially on their performance relative to each other. … Cooperation refers to a situation in which the participants seek out win-win outcomes from working together. (More)

Raw unconstrained competition looks scary; lies, betrayal, predation, starvation, war; so many things can go wrong! Which makes “managed competition” sound so comforting; whew, someone will limit the problems. Someone like a boss, police officer, sports referee, or government regulator.

However, raw unconstrained management also looks scary; that’s tyranny, which can go wrong in so so many ways! Such as via incompetence, exploitation, and rot. And so we can be comforted to hear that managers must compete. For example, when individual managers compete for jobs, firms compete for customers, or politicians compete for votes.

But who will guard the guardians? If we embed competitions within larger systems of managers, and also embed managers within larger systems of competition, won’t they all sit within some maximally-encompassing system, which must then be either competition, management, or some awkward mix of the two? This is the fundamental hard problem of design and governance, from which there is no easy escape.


A Zoologist’s Guide to Our Past

In his new book The Zoologist’s Guide to the Galaxy: What Animals on Earth Reveal About Aliens–and Ourselves, Cambridge zoologist Arik Kershenbaum purports to tell us what intelligent aliens will be like when we meet them:

This book is about how we can use that realistic scientific approach to draw conclusions, with some confidence, about alien life – and intelligent life in particular. (p.1)

Now, that won’t be for a long time, and they will even then be far more advanced than us:

We are absolutely in the infancy of our technological development, and that makes it exceptionally likely that any aliens we encounter will be more advanced than us. (p.160)

The chances of us encountering intelligent aliens [anytime soon] is so remote as to be almost dismissed. (p.320)

Even so, this is what aliens will be like:

One way to prepare ourselves mentally and practically for First Contact is … to reconcile ourselves to the fact that there are certain properties that intelligent life must have. … their behavior, how they move and feed and come together in societies, will be similar to ours. …

[Aliens and us] both have families and pets, read and write books, and care for our children and our relatives. … this situation is actually very likely. Those evolutionary forces that push us to be the way we are must also be pushing life on other planets to be like us. (pp.322-323)

And this will be their origin story:


How Bees Argue

The book Honeybee Democracy, published in 2010, has been sitting on my shelf for many years. Getting back into the topic of disagreement, I’ve finally read it. And browsing media articles about the book from back then, they just don’t seem to get it right. So let me try to do better.

In late spring and early summer, … colonies [of ordinary honeybees] become overcrowded … and then cast a swarm. … About a third of the worker bees stay at home and rear a new queen … while two-thirds of the workforce – a group of some ten thousand – rushes off with the old queen to create a daughter colony. The migrants travel only 100 feet or so before coalescing into a beardlike cluster, where they literally hang out together for several hours or a few days. … [They then] field several hundred house [scouts] to explore some 30 square miles … for potential homesites. (p.6)

These 300-500 scouts are the oldest most experienced bees in the swarm. To start, some of them go searching for sites. Initially a scout takes 13-56 minutes to inspect a site, in part via 10-30 walking journeys inside the cavity. After inspecting a site, a scout returns to the main swarm cluster and then usually wanders around its surface doing many brief “waggle dances” which encode the direction and distance of the site. (All scouting activity stops at night, and in the rain.)

Roughly a dozen sites are discovered via scouts searching on their own. Most scouts, however, are recruited to tout a site via watching another scout dance about it, and then heading out to inspect it. Each dance is only seen by a few immediately adjacent bees. These recruited scouts seem to pick a dance at random from among the ones they’ve seen lately. While initial scouts, those not recruited via a dance, have an 86% chance of touting their site via dances, recruited scouts only have a 55% chance of doing so.

Once recruited to tout a site, each scout alternates between dancing about it at the home cluster and then returning to the site to inspect it again. After the first visit, re-inspections take only 10-20 minutes. The number of dances between site visits declines with the number of visits, and when it gets near zero, after one to six trips, the bee just stops doing any scouting activity.

This decline in touting is accelerated by direct conflict. Bees that tout one site will sometimes head-butt (and beep at) bees touting other sites. After getting hit ten times, a scout usually quits. (From what I’ve read, it isn’t clear to me if any scout, once recruited to tout a site, is ever recruited again later to tout a different site.)

When scouts are inspecting a site, they make sure to touch the other bees inspecting that site. When they see 20-30 scouts inspecting a site at once, that generally implies that a clear majority of the currently active touting scouts are favoring this site. Scouts from this winning site then return to the main cluster and make a special sound which declares the search to be over. Waiting another hour or so gives enough time for scouts to return from other sites, and then the entire cluster heads off together to this new site.

The process I’ve described so far is enough to get all the bees to pick a site together and then go there, but it isn’t enough to make that be a good site. Yet, in fact, bee swarms seem to pick the best site available to them about 95% of the time. Site quality depends on cavity size, entrance size and height, cavity orientation relative to entrance, and wall health. How do they pick the best site?

Each scout who inspects a site estimates its quality, and encodes that estimate in its dance about that site. These quality estimates are error-prone; there’s only an 80% chance that a scout will rate a much better site as better. The key that enables swarms to pick better sites is this: between their visits to a site, scouts do a lot more dances for sites they estimate to be higher quality. A scout does a total of 30 dances for a lousy site, but 90 dances for a great site.

And that’s how bee swarms argue, re picking a new site. The process only includes an elite of the most experienced 3-5% of bees. That elite all starts out with no opinion, and then slowly some of them acquire opinions, at first directly and randomly via inspecting options, and then more indirectly via randomly copying opinions expressed near them. Individual bees may never change their acquired opinions. The key is that bees with opinions tend to express them more often when those opinions are better. Individual opinions fade with time, and the whole process stops when enough of a random sample of those expressing opinions all express the same opinion.
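The dance-and-quorum process above can be sketched as a toy simulation. This is my own simplification, not Seeley’s quantitative model; the parameters (`recruit_rate`, `trips`, `quorum`) are made-up stand-ins for the numbers in the book:

```python
import random

def simulate_swarm(qualities, n_scouts=300, quorum=30, n_initial=12,
                   trips=4, recruit_rate=0.2, max_rounds=500, rng=None):
    """Toy quorum model: scouts dance for a site in proportion to its
    quality, quit after a fixed number of trips, and recruit idle scouts
    who copy a randomly chosen dance.  The first site with `quorum`
    simultaneous supporters wins."""
    rng = rng or random.Random()
    k = len(qualities)
    # a dozen or so scouts discover sites on their own
    touting = [[rng.randrange(k), trips] for _ in range(n_initial)]
    idle = n_scouts - n_initial
    counts = [0] * k
    for _ in range(max_rounds):
        counts = [0] * k
        for site, _ in touting:
            counts[site] += 1
        best = max(range(k), key=counts.__getitem__)
        if counts[best] >= quorum:
            return best                     # quorum reached; swarm departs
        # dances performed this round, in proportion to site quality
        dances = [s for s, _ in touting for _ in range(int(qualities[s]))]
        # each touting scout uses up one trip, quitting at zero
        touting = [[s, t - 1] for s, t in touting if t > 1]
        # idle scouts watch the dance floor and may be recruited
        if dances:
            n_rec = sum(rng.random() < recruit_rate for _ in range(idle))
            touting += [[rng.choice(dances), trips] for _ in range(n_rec)]
            idle -= n_rec
        if not touting:                     # everyone quit before a quorum
            break
    return max(range(k), key=counts.__getitem__)

# the best of three candidate sites nearly always wins
wins = [0, 0, 0]
for seed in range(100):
    wins[simulate_swarm([30, 90, 50], rng=random.Random(seed))] += 1
```

The only “smart” ingredient is that better sites get proportionally more dances per scout; positive feedback from recruitment then amplifies that small per-bee bias into a reliable collective choice.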

Now that I know all this, it isn’t clear how relevant it is for human disagreement. But it does seem a nice simple example to keep in mind. With bees, a community typically goes from wide disagreement to apparent strong agreement, without requiring particular individuals to ever give up their strongly held opinions.


End War Or Mosquitoes?

Malaria may have killed half of all the people that ever lived. (more)

Over one million people die from malaria each year, mostly children under five years of age, with 90% of malaria cases occurring in Sub-Saharan Africa. (more)

378,000 people worldwide died a violent death in war each year between 1985 and 1994. (more)

Over the last day I’ve done two Twitter polls, one of which was my most popular poll ever. Each poll was on whether, if we had the option, we should try to end a big old nemesis of humankind. One was on mosquitoes, the other on war:

In both cases the main con argument is a worry about unintended side effects. Our biological and social systems are both very complex, with each part having substantial and difficult to understand interactions with many other parts. This makes it hard to be sure that an apparently bad thing isn’t actually causing good things, or preventing other bad things.

Poll respondents were about evenly divided on ending mosquitoes, but over 5 to 1 in favor of ending war. Yet mosquitoes kill many more people than do wars, mosquitoes are only a small part of our biosphere with only modest identifiable benefits, and war is a much larger part of key social systems with much easier to identify functions and benefits. For example, war drives innovation, deposes tyrants, and cleans out inefficient institutional cruft that accumulates during peacetime. All these considerations favor ending mosquitoes, relative to ending war.

Why then is there so much more support for ending war, relative to mosquitoes? The proximate cause seems obvious: in our world, good people oppose both war and also ending species. Most people probably aren’t thinking this through, but are instead just reacting to this surface ethical gloss. Okay, but why is murderous nature so much more popular than murderous features of human systems? Perhaps in part because we are much more eager to put moral blame on humans, relative to nature. Arguing to keep war makes you seem an ally of deeply evil humans, while arguing to keep mosquitoes only makes you an ally of an indifferent nature, which makes you far less evil by association.


Men Are Animals

I spent much of the (so far) middle of my life pursuing big/deep questions. That’s what I focused on in each subject I learned, that’s what I liked most about the first topic (physics) I specialized in, and that’s what drew me to each new subject as I switched.

It was good that I was able to stop jumping to new subjects, so that I could actually accomplish things. However, while I long avoided studying biology (too much disorganized detail!), I recently found an excuse to focus there, and have been enjoying my old pleasure via its deep questions. For example, I’ve long heard talk on the puzzle of why sex exists, and have heard many plausible arguments on why sex (mostly) won over asexual reproduction. But until a few days ago I hadn’t noticed a harder puzzle: why do most animals, but not most plants, have males?

Most plants are hermaphrodites; each organism has both male and female genitalia. Plants thus gain the many advantages of recombining genes from multiple parents, while also ensuring that each organism contributes fully to reproducing the species. Most animals, in contrast, reproduce via pairing one male and one female, with females investing more than males in children. In such species, males and females differ in many ways that can be understood as resulting from these differing investments in children.

Many of these differences seem to be costly for the species. For example, not only do males spend most of their resources competing with each other for access to females instead of helping with children, their competition often directly harms females and children. In fact, species where males differ more from females go extinct more quickly:

When comparing species, it emerged that those in which males were very different from females had a poorer prognosis for continued existence. The authors’ models predict a tenfold increase in extinction risk per unit time when species in which males are larger than females, with large differences in shape between the sexes, are compared with species in which the males are smaller than the females, with small differences in shape between the sexes. (more)

And yet males exist, at least in most animal species. Why? More monogamous species, like humans, where fathers invest more in kids, are less of a puzzle, but they remain a puzzle as long as males invest less. As plants show that an all-hermaphrodite equilibrium can robustly last long for complex species, there must be some big positive advantage to having males in animal, but not plant, species.

After reading a dozen or so papers, I can report: we just have no idea what that advantage is. One person suggests males are “an ‘experimental’ part of the species that allows the species to expand their ecological niche, and to have alternative configurations.” But this idea doesn’t seem to have been developed very far, and why wouldn’t this work just as well for plants?

The robust existence of animal males strongly suggests that we men have an important but-as-yet-unknown mission. We offer a gain that more than pays for our many costs, at least in most animal species. And yet our costs seem much clearer than our gains. We men might feel a bit better about our place in the world if we could better understand our positive contributions. And yet very few people study this deep question, even as vast numbers remain very engaged discussing human gender politics. That seems a shame to me.

Added 9:30p: Plants do compete for and select mates. It isn’t obvious that mobility allows far more such competition.

Added 4a: You might have seen evolutionary competition as overly destructive, but existing because more cooperation requires more coordination, which is hard. But the existence of males shows that, at least for animals, evolution saw “red in tooth and claw” competition between hermaphrodites as insufficient. So evolution created and maintains an even stronger kind of competition, between males who need invest less in children and can thus invest even more in competition.


How Does Evolution Escape Local Maxima?

I’ve spent most of my intellectual life as a theorist, but alas it has been a while since I’ve taken the time to learn a new powerful math-based theory. But in the last few days I’ve enjoyed studying Andreas Wagner’s theories of evolutionary innovation and robustness. While Wagner has some well-publicized and reviewed books, such as Arrival of the Fittest (2014) and Robustness and Evolvability in Living Systems (2005), the best description of his key results seems to be found in a largely ignored 2011 book: The Origins of Evolutionary Innovations. Which is based on many academic journal articles.

In one standard conception, evolution does hill-climbing within a network of genotypes (e.g., DNA sequence), rising according to a “fitness” value associated with the phenotype (e.g., tooth length) that results from each genotype. In this conception, a big problem is local maxima: hill-climbing stops once all the neighbors of a genotype have a lower fitness value. There isn’t a way to get to a higher peak if one first must travel through a lower valley to reach it. Maybe random noise could let the process slip through a narrow shallow valley, but what about valleys that are wide and deep? (This is a familiar problem in computer-based optimization search.)
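The local-maximum trap is easy to see in a minimal greedy hill-climber. This is a generic illustration of the search problem, not anything from Wagner’s book:

```python
import random

def hill_climb(fitness, genotype, neighbors, rng=None):
    """Greedy search: move to a strictly fitter neighbor until stuck."""
    rng = rng or random.Random(0)
    while True:
        better = [n for n in neighbors(genotype)
                  if fitness(n) > fitness(genotype)]
        if not better:
            return genotype        # no uphill neighbor: a (local) maximum
        genotype = rng.choice(better)

# a 1-D landscape: a low peak at position 2, the global peak at 8
landscape = [0, 1, 2, 1, 0, 1, 2, 3, 4, 3]
fit = lambda g: landscape[g]
nbrs = lambda g: [i for i in (g - 1, g + 1) if 0 <= i < len(landscape)]

print(hill_climb(fit, 0, nbrs))   # → 2: stuck at the low local peak
print(hill_climb(fit, 5, nbrs))   # → 8: the right slope reaches the top
```

Starting on the left slope, the climber halts at the low peak because reaching the global peak would require first descending through the valley at position 4.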

Wagner’s core model looks at the relation between genotypes and phenotypes for metabolism in an organism like E. coli. In this context, Wagner defines a genotype as the set of chemical reactions which the enzymes of an organism can catalyze, and he defines a phenotype as the set of carbon-source molecules from which an organism could create all the other molecules it needs, assuming that this source was its only place to get carbon (but allowing many sources of other needed molecules). Wagner defines the neighbors of a genotype as those that differ by just one reaction.

There are of course far more types of reactions between molecules than there are types of molecules. So using Wagner’s definitions, the set of genotypes is vastly larger than the set of phenotypes. Thus a great many genotypes result in exactly the same phenotype, and in fact each genotype has many neighboring genotypes with that same exact phenotype. And if we lump all the connected genotypes that have the same phenotype together into a unit (a unit Wagner calls a “genotype network”), and then look at the network of one-neighbor connections between such units, we will find that this network is highly connected.

That is, if one presumes that evolution (using a large population of variants) finds it easy to make “neutral” moves between genotypes with exactly the same phenotype, and hence the same fitness, then large networks connecting genotypes with the same phenotype imply that it only takes a few non-neutral moves between neighbors to get to most other phenotypes. There are no wide deep valleys to cross. Evolution can search large spaces of big possible changes, and doesn’t have a problem finding innovations with big differences.
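Wagner’s counting argument can be illustrated with a toy genotype-phenotype map of my own construction (not his metabolic model): 12-bit genotypes, where the phenotype is just the majority bit of each 4-bit block. Genotypes vastly outnumber phenotypes, so every genotype has same-phenotype neighbors along which a population can drift neutrally:

```python
from itertools import product

BLOCKS, BITS = 3, 4          # genotype length 12; phenotype is 3 bits

def phenotype(g):
    # each block maps to 1 iff at least three of its four bits are set
    return tuple(int(sum(g[i*BITS:(i+1)*BITS]) >= 3) for i in range(BLOCKS))

def neighbors(g):
    # genotypes differing in exactly one position
    for i in range(len(g)):
        yield g[:i] + (1 - g[i],) + g[i+1:]

genotypes = list(product((0, 1), repeat=BLOCKS * BITS))
phenos = {phenotype(g) for g in genotypes}
print(len(genotypes), len(phenos))          # → 4096 8

# every genotype has same-phenotype ("neutral") neighbors
min_neutral = min(sum(phenotype(n) == phenotype(g) for n in neighbors(g))
                  for g in genotypes)
print(min_neutral)                          # → 3: at least one per block
```

Here each phenotype’s “genotype network” is large and connected, so neutral flips let a population wander widely through genotype space, while a single well-placed non-neutral flip per block suffices to reach any other phenotype.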

Wagner argues that there are also far more genotypes than phenotypes for two other cases: the evolution of DNA sequences that set the regulatory interactions among regulatory proteins, and for the sequences of ribonucleotides or amino acids that determine the structure and chemical activity of molecules.

In addition, Wagner also shows the same applies to a computer logic gate toy problem. In this problem, there are four input lines, four output lines, and sixteen binary logic gates in between. The genotype specifies the type of each gate and the set of wires connecting all these things, while the phenotype is the mapping from inputs to outputs. Again, there are far more genotypes than phenotypes. However, the observant reader will notice that all mappings between four inputs and four outputs can be produced using only four internal gates; sixteen gates is a factor of four more than needed. But in the case of four gates the set of genotypes is not big enough compared to the set of phenotypes to allow easy evolution. For easy innovation, sixteen gates is enough, but four gates is not.

If we used a larger space of genotypes within which the number of logic gates could vary, and if the fitness function had a penalty for using more logic gates, then we’d have a problem. No matter where the genotype started, evolution might quickly cut the number of gates down to the minimum needed to implement its current input-output mapping, and then after that too few neutral changes would be possible to make evolution easy. The same problem seems possible in Wagner’s core model of metabolism; if the fitness function has a penalty for the number of enzymes used, evolution might throw away enzymes not needed to produce the current phenotype, after which too few neutral changes might be possible to allow easy evolution.

Wagner seems to suggest a solution: larger more complex systems are needed for robustness to varying environments:

Based on our current knowledge, the metabolic reaction networks of E. coli and yeast comprise more than 900 chemical reactions. However in a glucose minimal environment, more than 60 percent of these reactions are silent. … Overall, in E. coli, the fraction of reactions that would not reduce bio-mass growth when eliminated exceeds 70 percent. This is … a general property of viable networks that have similar complexity. … As a metabolic generalist, the E. coli metabolic network can synthesize its biomass from more than 80 alternative carbon sources. … All these observations indicate that the large metabolic networks of free-living organisms are much more complex than necessary to sustain life in any one environment. Their complexity arises from their viability in multiple environments. A consequence is that these networks appear highly robust to reaction removal in any one environment, where every metabolic network has multiple neutral neighbors. This neutrality, however, is conditional on the environment. (pp.153-154)

I’m not sure this solves the problem, however. In the logic gate toy problem, even if phenotype fitness is given by a weighted average over environments, we’ll still have the same temptation to increase fitness by dropping gates not needed to implement the current best bit mapping. In the case of enzymes for metabolism, fitness given by a weighted average of environments may also promote an insufficient complexity of enzymes. It seems we need a model that can represent the value of holding gate or enzyme complexity in reserve against the possibility of future changes.

I worry that this more realistic model, whatever it may be, may contain a much larger set of phenotypes, so that the set of genotypes is no longer much larger, and so no longer guarantees many neutral changes to genotypes. Perhaps a “near neutrality” will apply, so that many genotype neighbors have only small fitness differences. But it may require a much more complex analysis to show that outcome; mere counting may not be enough. I still find it hard to believe that for realistic organisms, the set of possible phenotypes is much less than the set of genotypes. Though perhaps I could believe that many pairs of genotypes produce the same distribution over phenotypes, as environments vary.

Added 10am: Another way to say this: somehow the parameter that sets how much complexity to keep around has to change a lot slower than do most other parameters encoded in the genome. In this way it could notice the long term evolvability benefits of complexity.


More Than Death, Fear Decay

Most known “systems” decay, rot, age, and die. We usually focus on the death part, but the more fundamental problem is decay (a.k.a. rotting, aging). Death is almost inevitable, as immortality is extremely difficult to achieve. Systems that don’t decay can still die; we sometimes see systems where the chance of death stays constant over time. But for most complex systems, the chance of death rises with time, due to decay.

Many simple physical systems, like chairs, decay because the materials of their parts decay. Such systems can often be rejuvenated by replacing those materials. More generally, simple modular systems can be rejuvenated by replacing the modular parts that decay. For example, it is possible to spend enough to maintain most cars and buildings indefinitely in a nearly original condition, though we rarely see this as worth the bother.

Complex adaptive systems (CAS), such as firms, have many parts in complex relations, relations that change in an attempt to adapt to changing conditions. When a CAS changes its design and structure to adapt, however, this rarely results in modular sub-designs that can be swapped out. Alas, the designs of most known CAS decay as they adapt. In biological organisms this is called “aging”, in software it is called “rot”, and in product design this is called the “innovator’s dilemma”. Human brains change from having “fluid” to “crystallized” intelligence, and machine learning systems trained in one domain usually find it harder to learn quite different domains. We also see aging in production plans, firms, empires, and legal systems. I don’t know of data on whether things like cities, nations, professions, disciplines, languages, sports, or art genres age. But it isn’t obvious that they don’t also decay.

It is not just that it is easier to create and train new CAS, relative to rejuvenating old ones. It seems more that we just don’t know how to prevent rot at any remotely reasonable cost. In software, designers often try to “refactor” their systems to slow the process of aging. And sometimes such designers report that they’ve completely halted aging. But these exceptions are mostly in systems that are small and simple, with stable environments, or with crazy amounts of redesign effort.

However, I think we can see at least one clear exception to this pattern of rotting CAS: some generalist species. If the continually changing environment of Earth caused all species to age at similar rates, then over the history of life on Earth we would see a consistent trend toward a weaker ability of life to adapt to changing conditions. Eventually life would lose its ability to sufficiently adapt, and life would die out. If some kinds of life could survive in a few very slowly changing garden environments, then eventually all life would descend from the stable species that waited unchanging in those few gardens. The longer it had been since a species had descended from a stable garden species, the faster that species would die out.

But that isn’t what we see. Instead, while species that specialize to particular environments do seem to go extinct more easily, generalist species seem to maintain their ability to adapt across eons, even after making a great many adaptations. Somehow, the designs of generalist species do not seem to rot, even though typical organisms within that species do rot. How do they do that?

It is possible that biological evolution has discovered some powerful design principles of which we humans are still ignorant. If so, then eventually we may learn how to cheaply make CAS that don’t rot. But in this case, why doesn’t evolution use those anti-rot design principles to create individual organisms that don’t decay or age? Evolution seems to judge it much more cost effective to make individual organisms that rot. A more likely hypothesis is that there is no cheap way to prevent rot; evolution has just continually paid a large cost to prevent rot. Perhaps early on, some species didn’t pay this cost, and won for a while. But eventually they died from rot, leaving only non-rotting species to inherit the Earth. It seems there must be some level in a system that doesn’t rot, if it is to last over the eons, and selection has ensured that the life we now see has such a level.

If valid, this perspective suggests a few implications for the future of life and civilization. First, we should seriously worry about which aspects of our modern civilization system are rotting. Human culture has lasted a million years, but many parts of our modern world are far younger. If the first, easiest version of a system that we can find to do something is typically a rotting system, and if it takes a lot more work to find a non-rotting version, should we presume that most of the new systems we have are rotting versions? Farming-era empires consistently rotted; how sure can we be that our world-wide industry-era empire isn’t similarly rotting today? We may be accumulating a technical debt that will be expensive to repay. Law and regulation seem to be rotting; should we try to induce a big refactoring there? Should we try to create and preserve contrarian subcultures or systems that are less likely to crash along with the dominant culture and system?

Second, we should realize that it may be harder than we thought to switch to a non-biological future. We humans are now quite tied to the biosphere, and would quickly die if biology were to die. But we have been slowly building systems that are less closely tied to biology. We have been digging up materials in mines, collecting energy directly from atoms and the Sun, and making things in factories. And we’ve started to imagine a future where the software in our brains is copied into factory-made hardware, i.e., ems, joined there by artificial software. At which point our descendants might no longer depend on biological systems. But replacing biological systems with our typically rotting artificial systems may end badly. And making artificial systems that don’t rot may be a lot more expensive and time-consuming than we’ve anticipated.

Some imagine that we will soon discover a simple powerful general learning algorithm, which will enable us to make a superintelligence, a super-smart hyper-consistent eternal mind with no internal conflicts and arbitrary abilities to indefinitely improve itself, make commitments, and preserve its values. This mind would then rule the universe forever more, at least until it met its alien equivalent. I expect that these visions have not sufficiently considered system rot, among other issues.

In my first book I guessed that during the age of em, individual ems would become fragile over time, and after a few subjective centuries they’d need to be replaced by copies of fresh scans of young humans. I also guessed that eventually it would become possible to substantially redesign brains, and that the arrival of this ability might herald the start of the next age after the age of em. If this requires figuring out how to make non-rotting versions of these new systems, the age of em might last even longer than one would otherwise guess.


Brains Simpler Than Brain Cells?

Consider two possible routes to generating human level artificial intelligence (AI): brain emulation (ems) versus ordinary AI (wherein I lump together all the other usual approaches to making smart code). Both approaches require that we understand something well enough to create a functional replacement for it. Ordinary AI requires this for entire brains, while ems require this only for brain cells.

That is, to make ordinary AI we need to find algorithms that can substitute for most everything useful that a human brain does. But to make brain emulations, we need only find models that can substitute for what brain cells do for brains: take input signals, change internal states, and then send output signals. (Such brain cell models need not model most of the vast complexity of cells, complexity that lets cells reproduce, defend against predators, etc.)

To make an em, we will also require brain scans at a sufficient spatial and chemical resolution, and enough cheap fast parallel computers. But the difficulty of achieving these other requirements scales with the difficulty of modeling brain cells. The simpler brain cells are, the less detail we’ll need to scan, and the smaller computers we’ll need to emulate them. So the relative difficulty of ems vs ordinary AI mainly comes down to the relative model complexity of brain cells versus brains.
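The linear scaling claimed here can be made concrete with a minimal sketch; every number below is a purely illustrative assumption (a rough neuron count and some hypothetical per-cell state sizes), not an estimate from the text:

```python
# Illustrative only: all figures are assumptions for this sketch.
NEURONS = 8.6e10  # rough count of neurons in a human brain

# Total stored scan state grows linearly with the bits needed to
# capture one inactive cell's state in our best cell model.
for bits_per_cell in (1e2, 1e4, 1e6):
    scan_bytes = NEURONS * bits_per_cell / 8
    print(f"{bits_per_cell:.0e} bits/cell -> {scan_bytes:.2e} bytes of state")
```

The point of the loop is just that a 100x simpler cell model means 100x less scan data and proportionally smaller computers, which is why the em-vs-AI comparison hinges on cell model complexity.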

Today we are seeing a burst of excitement about rapid progress in ordinary AI. While we’ve seen such bursts every decade or two for a long time, many people say “this time is different.” Just as they’ve done before; for a long time the median published forecast has said human level AI will appear in thirty years, and the median AI researcher surveyed has said forty years. (Even though such people estimate 5-10x slower progress in their subfield in the past twenty years.)

In contrast, we see far less excitement now about rapid progress in brain cell modeling. Few neuroscientists publicly estimate brain emulations soon, and no one has even bothered to survey them. Many take these different levels of hype and excitement as showing that in fact brains are simpler than brain cells – we will more quickly find models and algorithms that substitute for brains than we will those that can substitute for brain cells.

Now while it just isn’t possible for brains to be simpler than brain cells, it is possible for our best models that substitute for brains to be simpler than our best models that substitute for brain cells. This requires only that brains be far more complex than our best models that substitute for them, and that our best models that substitute for brain cells are not far less complex than such cells. That is, humans will soon discover a solution to the basic problem of how to construct a human-level intelligence that is far simpler than the solution evolution found, but evolution’s solution is strongly tied to its choice of very complex brain cells, cells whose complexity cannot be substantially reduced via clever modeling. While evolution searched hard for simpler cheaper variations on the first design it found that could do the job, all of its attempts to simplify brains and brain cells destroyed the overall intelligence that it sought to maintain.

So maybe what the median AI researcher and his or her fans have in mind is that the intelligence of the human brain is essentially simple, while brain cells are essentially complex. This essential simplicity of intelligence view is what I’ve attributed to my ex-co-blogger Eliezer Yudkowsky in our foom debates. And it seems consistent with a view common among fast AI fans that once AI displaces humans, AIs would drop most of the distinctive features of human minds and behavior, such as language, laughter, love, art, etc., and also most features of human societies, such as families, friendship, teams, law, markets, firms, nations, conversation, etc. Such people tend to see such human things as useless wastes.

In contrast, I see the term “intelligence” as mostly used to mean “mental betterness.” And I don’t see a good reason to think that intelligence is intrinsically much simpler than betterness. Human brains sure look complex, and even if big chunks of them by volume may be modeled simply, the other chunks can contain vast complexity. Humans really do a very wide range of tasks, and successful artificial systems have only done a small range of those tasks. So even if each task can be done by a relatively simple system, it may take a complex system to do them all. And most of the distinctive features of human minds and societies seem to me functional – something like them seems useful in most large advanced societies.

In contrast, for the parts of the brain that we’ve been able to emulate, such as parts that process the first inputs of sight and sound, what brain cells there do for the brain really does seem pretty simple. And in most brain organs what most cells do for the body is pretty simple. So the chances look pretty good that what most brain cells do for the brain is pretty simple.

So my bet is that brain cells can be modeled more simply than can entire brains. But some seem to disagree.


Tyler Says Never Ems

There are smart intellectuals out there who think economics is all hogwash, and who resent economists continuing on while their concerns have not been adequately addressed. Similarly, people in philosophy of religion and philosophy of mind resent cosmologists and brain scientists continuing on as if one could just model cosmology without a god, or reduce the mind to physical interactions of brain cells. But in my mind such debates have become so stuck that there is little point in waiting until they are resolved; some of us should just get on with assuming particular positions, especially positions that seem so very reasonable, even obvious, and seeing where they lead.

Similarly, I have heard people debate the feasibility of ems for many decades, and such debates have similarly become stuck, making little progress. Instead of getting mired in that debate, I thought it better to explore the consequences of what seems to me the very reasonable position that ems will eventually be possible. Alas, that mud pit has strong suction. For example, Tyler Cowen:

Do I think Robin Hanson’s “Age of Em” actually will happen? … my answer is…no! .. Don’t get me wrong, I still think it is a stimulating and wonderful book.  And if you don’t believe me, here is The Wall Street Journal:

Mr. Hanson’s book is comprehensive and not put-downable.

But it is best not read as a predictive text, much as Robin might disagree with that assessment.  Why not?  I have three main reasons, all of which are a sort of punting, nonetheless on topics outside one’s areas of expertise deference is very often the correct response.  Here goes:

1. I know a few people who have expertise in neuroscience, and they have never mentioned to me that things might turn out this way (brain scans uploaded into computers to create actual beings and furthermore as the dominant form of civilization).  Maybe they’re just holding back, but I don’t think so.  The neuroscience profession as a whole seems to be unconvinced and for the most part not even pondering this scenario. ..

3. Robin seems to think the age of Em could come about reasonably soon. …  Yet I don’t see any sign of such a radical transformation in market prices. .. There are for instance a variety of 100-year bonds, but Em scenarios do not seem to be a factor in their pricing.

But the author of that Wall Street Journal review, Daniel J. Levitin, is a neuroscientist! You’d think that if his colleagues thought the very idea of ems iffy, he might have mentioned caveats in his review. But no, he worries only about timing:

The only weak point I find in the argument is that it seems to me that if we were as close to emulating human brains as we would need to be for Mr. Hanson’s predictions to come true, you’d think that by now we’d already have emulated ant brains, or Venus fly traps or even tree bark.

Because readers kept asking, in the book I give a concrete estimate of “within roughly a century or so.” But the book really doesn’t depend much on that estimate. What it mainly depends on is ems initiating the next huge disruption on the scale of the farming or industrial revolutions. Also, if the future is important enough to have a hundred books exploring scenarios, it can be worth having books on scenarios with only a 1% chance of happening, and taking those books seriously as real possibilities.

Tyler has spent too much time around media pundits if he thinks he should be hearing a buzz about anything big that might happen in the next few centuries! Should he have expected to hear about cell phones in 1960, or smart phones in 1980, from a typical phone expert then, even without asking directly about such things? Both of these were reasonably foreseen many decades in advance, yet you’d find it hard to see signs of them several decades before they took off in casual conversations with phone experts, or in phone firm stock prices. (Betting markets directly on these topics would have seen them. Alas, we still don’t have such things.)

I’m happy to accept neuroscientist expertise, but mainly on how hard it is to scan brain cells and model them on computers. This isn’t going to come up in casual conversation, but if asked, neuroscientists will pretty much all agree that it should eventually be possible to create computer models of brain cells that capture their key signal processing behavior, i.e., the part that matters for signals received by the rest of the body. They will say it is a matter of when, not if. (Remember, we’ve already done this for the key signal processing behaviors of eyes and ears.)

Many neuroscientists won’t be familiar with computer modeling of brain cell activity, so they won’t have much of an idea of how much computing power is needed. But for those familiar with computer modeling, the key question is: once we understand brain cells well, what are plausible ranges for 1) the number of bits required to store the current state of each inactive brain cell, and 2) how many computer processing steps (or gate operations) per second are needed to mimic an active cell’s signal processing.

Once you have those numbers, you’ll need to talk to people familiar with computing cost projections to translate these computing requirements into dates when they can be met cheaply. And then you’d need to talk to economists (like me) to understand how that might influence the economy. You shouldn’t remotely expect typical neuroscientists to have good estimates there. And finally, you’ll have to talk to people who think about other potential big future disruptions to see how plausible it is that ems will be the first big upcoming disruption on the scale of the farming or industrial revolutions.
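A back-of-envelope version of that estimation pipeline can be sketched as follows; every figure here (neuron count, ops per cell, current cost per op, budget, and doubling time) is a hypothetical placeholder chosen for illustration, not a number from the text:

```python
import math

# All figures below are hypothetical assumptions for this sketch.
NEURONS = 8.6e10             # rough human neuron count
OPS_PER_CELL_PER_SEC = 1e4   # assumed gate ops/sec to mimic one active cell

total_ops_per_sec = NEURONS * OPS_PER_CELL_PER_SEC

# Translate the compute requirement into a date using an assumed
# Moore's-law-like trend: cost per op/sec halves every 1.5 years.
cost_per_ops_now = 1e-7      # assumed dollars per sustained op/sec today
target_budget = 1e6          # assumed hardware budget for running one em
doubling_years = 1.5

cost_now = total_ops_per_sec * cost_per_ops_now
years_until_cheap = max(0.0, doubling_years * math.log2(cost_now / target_budget))
print(f"roughly {years_until_cheap:.0f} years until one em fits the budget")
```

The structure mirrors the division of labor above: neuroscientists supply the per-cell numbers, computing-cost forecasters supply the trend, and economists take the resulting date as an input.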
