General Evolvable Brains
Human brains today can do a remarkably wide range of tasks. Our mental capacities seem much more “general” than those of any artificial system we’ve ever created. Those who are trying to improve such systems have long wondered: what is the secret of human general intelligence? In this post I want to consider what we can learn about this from the fact that the brain evolved. How would an evolved brain be general?
A key problem faced by single-celled organisms is how to make all of their materials and processes out of the available sources of energy and materials. They do this mostly via metabolism, which is mostly a set of enzymes that encourage particular reactions converting some materials into others, together with cell-wall containers that keep those enzymes close to one another. Some organisms are more general than others, in that they can do this key task in a wider range of environments.
Most single-celled organisms use an especially evolvable metabolism design space. That is, their basic overall metabolism system seems especially well-suited to finding innovations and adaptations mostly via blind random search, in a way that avoids getting stuck in local maxima. As I explained in a recent post, natural metabolisms are evolvable in part because they have genotypes that are highly redundant relative to phenotypes: many sets of enzymes can map any given set of inputs into any given set of outputs. And this redundancy requires a substantial overcapacity; the metabolism needs to contain many more enzymes than are strictly needed to create any given mapping.
The main way that such organisms are general is that they have metabolisms with a large library of enzymes. Not just a large library of genes that could code for enzymes if turned on, but an actual large set of enzymes usually created. They make many more enzymes than they actually need in each particular environment where they find themselves. This comes at a great cost; making all those enzymes and driving their reactions doesn’t come cheap.
A relevant analogous toy problem is that of logic gates mapping input signals onto output signals:
[In] a computer logic gate toy problem, … there are four input lines, four output lines, and sixteen binary logic gates between. The genotype specifies the type of each gate and the set of wires connecting all these things, while the phenotype is the mapping between input and output gates. … All mappings between four inputs and four outputs can be produced using only four internal gates; sixteen gates is a factor of four more than needed. But in the case of four gates the set of genotypes is not big enough compared to the set of phenotypes to allow easy evolution. For [evolvable] innovation, sixteen gates is enough, but four gates is not. (more)
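The redundancy claim in this toy problem can be illustrated directly. The sketch below is a minimal Python version of the setup, simplified by using only NAND gates (the original problem allows a choice of gate types, so this is an assumption): it samples random genotypes (gate wirings) and counts how many distinct phenotypes (input-to-output truth tables) they produce, showing that many genotypes collapse onto far fewer phenotypes.

```python
import itertools
import random

random.seed(0)

N_IN, N_OUT = 4, 4  # four input lines, four output lines

def random_genotype(n_gates):
    """A genotype: each gate NANDs two earlier signals (inputs or
    prior gates), and each output line taps one available signal."""
    gates = []
    for g in range(n_gates):
        pool = N_IN + g  # signals available to this gate
        gates.append((random.randrange(pool), random.randrange(pool)))
    outs = [random.randrange(N_IN + n_gates) for _ in range(N_OUT)]
    return gates, outs

def phenotype(gates, outs):
    """The phenotype: the full truth table mapping all 16 input
    patterns to their 4 output bits."""
    table = []
    for bits in itertools.product([0, 1], repeat=N_IN):
        sig = list(bits)
        for a, b in gates:
            sig.append(1 - (sig[a] & sig[b]))  # NAND
        table.append(tuple(sig[o] for o in outs))
    return tuple(table)

def distinct_phenotypes(n_gates, samples=2000):
    """Sample random genotypes and count distinct phenotypes; the gap
    between samples and this count is the genotype redundancy."""
    return len({phenotype(*random_genotype(n_gates)) for _ in range(samples)})

# With 16 gates the genotype space is vastly larger than with 4,
# yet many genotypes map to the same phenotype -- that many-to-one
# redundancy is what eases blind evolutionary search.
for n in (4, 16):
    print(n, "gates:", distinct_phenotypes(n), "distinct phenotypes in 2000 samples")
```

This is only a sketch of the quoted setup, not a reconstruction of the original study’s exact circuit model; the point it illustrates is just the many-genotypes-per-phenotype redundancy.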
Note that evolution doesn’t always use such highly evolvable design spaces. For example, our skeletal structure doesn’t have lots of extra bones sitting around ready to be swapped into new roles in new environments. In such cases, evolution chose not to pay large extra costs for generality and evolvability, because the environment seemed predictable enough to stay close to a good enough design. As a result, innovation and adaptation of skeletal structure is much slower and more painful, and could fail badly in novel enough environments.
Now let’s consider brains. It may be that for some tasks, evolution found such an effective structure that it chose to commit to that structure, betting that its solution was stable and reliable enough across future environments to let it forgo the big extra costs of more general and evolvable designs. But if we are looking to explain a surprising generality, flexibility, and rapid evolution in human brains, it makes sense to consider the possibility that human brain design took a different path, one more like that of single-celled metabolism.
That is, one straightforward way to design a general evolvable brain is to use an extra-large toolbox of mental modules that can be connected together in many different ways. While each tool might be a carefully constructed jewel, the whole set of tools would have less of an overall structure. Like a pile of logic gates that can be connected in many ways, or metabolism sub-networks that can be combined into many larger networks. In this case, the secret of general evolvable intelligence would lie less in the particular tools and more in having an extra-large set of tools, plus some simple general ways to search the space of tool combinations. A tool set so large that the brain can do most tasks in a great many different ways.
Much of the search for brain innovations and adaptations would then be a search in the space of ways to connect these tools together. Some aspects of this search could happen over evolutionary timescales, some could happen over the lifetime of particular brains, and some could happen on the timescale of cultural evolution, once that got started.
On the timescale of an individual brain lifetime, a search for tool combinations would start with brains that are highly connected, and then prune connections over the long term as particular desired paths between tools are found. As one learned how to do a task better, one would activate smaller brain volumes. When some brain parts were damaged, brains would often be able to find other combinations of the remaining tools to achieve similar functions. Even losing a whole half of a brain might not greatly reduce performance. And these are all in fact common patterns for human brains.
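That kind of pruning search can be sketched abstractly. The toy Python below (a made-up illustration, not a brain model; all names are hypothetical) starts from a densely connected directed graph of “modules” and greedily removes edges, keeping an edge only when removing it would break some required input-to-output route:

```python
import random

random.seed(1)

def reachable(edges, src, dst):
    """Can we get from src to dst along directed edges?"""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
    seen, frontier = {src}, [src]
    while frontier:
        n = frontier.pop()
        for m in adj.get(n, []):
            if m not in seen:
                seen.add(m)
                frontier.append(m)
    return dst in seen

def prune(edges, must_connect):
    """Drop edges one at a time, in random order, so long as every
    required (src, dst) route in must_connect stays intact."""
    kept = set(edges)
    for e in random.sample(sorted(kept), len(kept)):
        trial = kept - {e}
        if all(reachable(trial, s, d) for s, d in must_connect):
            kept = trial
    return kept

# Start fully connected over 6 modules; require routes 0->5 and 1->4.
full = {(a, b) for a in range(6) for b in range(6) if a != b}
lean = prune(full, [(0, 5), (1, 4)])
print(len(full), "edges pruned down to", len(lean))
```

By construction every surviving edge is necessary for some required route, mirroring the pattern above: dense initial wiring, then activating ever-smaller subsets as the desired paths are found.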
Yes, something important happened early in human history. Some key event changed the growth rate of human abilities, though not immediate ability levels, and it did this without much changing brain modules and structures, which remain quite close to those of other primates. Plausibly, we had finally collected enough hard-wired tools, or refined them well enough, to let us start to reliably copy each others’ behaviors. And that allowed cultural evolution, a much-faster-than-evolutionary search in the space of practices. Such practices included choices of which combinations of brain modules to activate in which contexts.
What can this view say about the future of brains? On ems, it suggests that human brains have a lot of extra capacity. We can probably go far in taking an em that can do a job task and throwing away brain modules not needed for that task. At some point cutting hurts performance too much, but for many job tasks you might cut 50% to 90% before then.
Regarding other artificial intelligence, it suggests that if we still have a lot to learn via substantially random search, with no grand theory to integrate it all, then we’ll have to focus on collecting more and better tools. Machines would gradually get better as we collect more tools. There may be thresholds where you need enough tools to do certain jobs well, and while most tools would make only small contributions, perhaps a few bigger tools matter more. So key thresholds would come from the existence of key jobs, and from the lumpiness of tools. We should expect progress to be relatively continuous, except perhaps due to the discovery of especially lumpy tools, or to passing thresholds that enable key jobs to be done.