Human Cells In Multicellular Future Minds?

In general, adaptive systems vary along an axis from general to specific. A more general system works better (either directly or after further adaptation) in a wider range of environments, and also with a wider range of other adapting systems. It does this in part via having more useful modularity and abstraction. In contrast, a more specific system adapts to a narrower range of specific environments and other subsystems.

Systems that we humans consciously design tend to be more general, i.e., less context dependent, relative to the “organic” systems that they often replace. For example, compare grid-like city street plans to locally evolved city streets, national retail outlets to locally arising stores and restaurants, traditional to permaculture farms, hotel rooms to private homes, big formal firms to small informal teams, uniforms to individually-chosen clothes, and refactored to un-refactored software. The first entity in each pair tends to more easily scale and to match more environments, while the second in each pair tends to be adapted in more detail to particular local conditions.

The book Seeing Like a State describes how states often impose more general systems in order to help them tax and monitor locals, replacing a previous variety of systems of law, language, names, etc. Human minds start out general and flexible when young, and become more specific and inflexible as they age. Large software systems tend to evolve over time from general to specific. At first, the developers of large software systems better understand their architectures, and can more easily change them, even if users are less satisfied with specific system features. Later on, such systems contain more user-requested features, but have architectures that are less well understood or changeable.

More specific systems are more at risk from big changes to their environment, but with only modest environmental variation they tend to be better adapted to local conditions. That is, most successful biological and cultural systems in our world are not very general. Specific systems have even stronger advantages when a set of systems adapt together to each other. When environmental changes remain modest, such sets of mutually adapted systems can entrench themselves indefinitely; to win, competitors must replace the entire set of systems with new variations.

Consider the example of biological cells. For eons, cells faced the world individually, and evolved complex interdependent sets of subsystems to deal with this difficult task. The sharing of cell part designs created pressures for designs to be somewhat general; designs that could work in more situations could be more widely shared. Even so, cell subsystems tended to become well adapted to each other, and the whole set of standard cell designs has become rather entrenched.

The cells in the human body vary by a factor of at least one hundred thousand in volume. This shows that standard cell designs embody substantial generality with respect to cell size. Yet even this generality has its limits; it was apparently very hard to stretch standard cell designs to create single-cell organisms good enough to compete with the familiar large organisms we see in our world. Evolution instead opted to create multicellular organisms — many small cells grouped together to create a single large unit.

Pause to notice the enormous waste involved in this choice. Each cell in a multicellular organism redundantly retains most of the features needed to exist as a single-cell creature in a hostile world, even though it no longer lives in such a hostile world. It has its own barrier against the world, and is careful to control what goes across this barrier. It has its own sensors to detect dangers and opportunities outside, and a full range of local manufacturing abilities. Instead of taking advantage of the sort of production scale economies that are central to our industrial economy, each cell makes almost everything for itself!

But if you think that a strongly competitive environment couldn’t possibly tolerate such inefficiency, then you just don’t appreciate the titanic power of entrenched systems. Over eons, standard cell designs became a very well-honed and well-oiled machine, with thousands of parts all carefully designed to fit well with each other. To create a similarly effective large organism that isn’t built out of many small cells, evolution would have to mostly start over and search a very long time in the space of designs for much larger systems. Yes, eventually it might find much better designs, but before then it might have to search nearly as long and hard as it had previously searched to find small cell designs. So far, that has just been a bridge too far for biological evolution. Far too far. For a half billion years, evolution has much preferred the small-cell bird in the hand to the new-big-organism bird that might be found after searching an astronomical-sized bush.

Now consider the future prospects for human minds if they compete as workers with other kinds of software. Assume that we will eventually find a way (as with ems) to extract the software in human minds from the hardware in which it is now embedded, so that human mind software faces no hardware advantages relative to other kinds of software. Given this assumption, the question becomes: how effective is human mind software relative to other kinds of software in accomplishing future mental/computational tasks?

Some think it obvious that because human minds evolved to win in a distant past environment, they couldn’t possibly win in a different future environment. But this same logic would also conclude that small single cells couldn’t possibly win when biological evolution selects for larger organisms. It ignores the possibility that human minds may be valuable carefully-honed packages of interdependent systems resulting from a vast evolutionary heritage. The future might not be willing to fund the enormous search required to find something very different and better. At least during a future era that lasts long enough to have an importance comparable to the last half billion years of multicellular animals.

Human brains are “general” in the sense of being able to do a rather wide range of tasks moderately well. However, they don’t seem to achieve this via a consistent design “generality” of the sort discussed above. Compared to the software that we humans write, the software in our brains is in many ways less general, abstract, and modular. In our brains, events are poorly synchronized, hardware is mixed up with software, memory is mixed up with processing, addresses are mixed up with contents, and doing is mixed up with learning; this doesn’t happen in the more modular, better-abstracted systems we design. While our brain has distinguishable subsystems, these subsystems are far more interconnected and less modular than in typical software systems.

When evolution honed the human brain, and its animal brain ancestors, it faced strong space limits. Most software was tied to dedicated hardware, and brains could only hold a limited amount of hardware. When we humans write software, in contrast, we quickly achieve modest competence via abstraction and modularity, which is helped by our having plenty of space to store software separately from hardware when not in use. When we want software to do a new thing, we mostly just write a new tool to do it. But brains instead had to make do with continuing, over a very long time, to change and reintegrate their existing tools.

The net result is that, compared to familiar software, the human brain is a marvel of highly integrated tools, each of which is useful in many task contexts. But this integration came at great cost in evolutionary search, and these subsystems are now highly entrenched and entangled with each other, and with supporting social systems. So like the carefully honed cells in multicellular animals, future competitive minds may prefer to often reuse human brains, modified to the modest degrees possible in such a huge tangled legacy software system that no one understands well. Not everything in a multicellular animal is a small cell; there are bones and blood fluids, for example. But most of it is cells.

One big disadvantage of integrated, non-modular brains is that you must devote an entire brain to most any task, even a very simple one. For the last century, we have found humans doing many tasks that could also be done by rather simple and cheap combinations of hardware and software. For obvious reasons, we have automated these tasks first. But eventually we will run out of tasks that can easily be done by computers much smaller than human brains. At that point we will face a less obvious choice: give the task to some variation on a well-integrated human mind, or write a big pile of software to do it. Or a variation on these, such as software written by software.

Humans won’t always win that contest, but it seems plausible that for a long time they may often win. Human-like minds will probably win more often at tasks that are highly tangled with other tasks that such minds do. This includes tasks like law, marketing, regulation, and planning, and meta-tasks, such as management and governance. When human-like minds are modified, their most highly tangled networks for conscious thought and mind-wandering may change the least, at least for a long time. In this sort of world, minds very different from humans may not often be given tasks with a wide enough scope of action to be very dangerous. While this future could be very strange to actually see, it might still be less strange than many of you have feared.

  • arch1

    In a world of competing systems made of modules which start out human-brain-like, wouldn’t one natural thing be to specialize the modules more and more (ie to a greater extent than existing human minds can specialize)?

    • I’m not following you; care to rephrase?

      • arch1

        I think you’re saying that there will be competing systems made of interacting pieces, and in the most tangled parts of these systems those pieces may for some time be and remain pretty human-brain-like.

        I question how long they could remain human-brain-like, given the advantages of specializing such pieces and the (presumably) greater *ability* to specialize AIs than to specialize (train, educate) actual human brains.

      • All known entrenched systems also have advantages from specializing. So I don’t see how that issue is more important here than in most of those other contexts. For example, consider cells in multicellular organisms.

      • arch1

        I find this response convincing, but because of the SW system analogy, not the multicellular one (in which evolution is constrained to local moves in genome space). I think the reason I raised the objection in the first place is that I misinterpreted ‘entrenched’ to imply a long stretch of real time rather than of subjective time, as I think you have been emphasizing. IOW: old habits die hard :-)


  • You make good arguments about the importance of flexibility in the component elements in the system but the difference between humans and other kinds of software isn’t merely flexibility.

    In particular, another thing one sees in biological systems is that cells working as part of a multicellular organism often behave very differently than single-celled organisms. Sure, part of that is a degree of specialization, but the big difference to me seems to be the degree to which cells in multicellular organisms give up the normal behaviors and interests of single-celled organisms. Indeed, human cells routinely kill themselves based on signals from their surroundings, and they accept the orders/directions of their neighbors while giving up the dispositions to find their own food, avoid predation, etc.

    Now, I happen to believe we can do far more than modest modification and if ems happen we will see extreme modification along similar lines. However, given your belief that only modest modification is possible isn’t the greater ability of other software systems to put aside individual interests and self-protective behaviors to work for a larger collective a counterbalancing consideration?

    • I agree that human “cells” in future minds would be modestly modified. But I see a great deal of putting aside of individual interest as falling well within the range of modest modification. We already do so within existing collectives.

  • Interesting points.

    Though the evolution comparison might not hold, since the difficulty in escaping local maxima is the classic limitation of evolution: it can only make changes in a series of small steps where every step is an improvement. Intelligent designers have a much greater ability to step back and come at a problem from a different angle.

    • ALL optimization systems have troubles with local maxima. Some might do better at it than others, but it is always an obstacle.
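      The point that every local search faces this obstacle can be made concrete with a toy sketch (the landscape and function names here are my own invention, purely for illustration): a greedy climber that, like evolution, only takes single improving steps halts at whichever peak is nearest, not necessarily the highest.

```python
# Toy illustration of local maxima: a greedy hill climber that only
# takes single improving steps, as evolution does with small mutations.
heights = [1, 3, 5, 4, 2, 6, 9, 7]  # a local peak of 5 at index 2; the global peak of 9 at index 6

def hill_climb(heights, start):
    """Repeatedly move to the best immediate neighbor; stop when none improves."""
    i = start
    while True:
        neighbors = [j for j in (i - 1, i, i + 1) if 0 <= j < len(heights)]
        best = max(neighbors, key=lambda j: heights[j])
        if best == i:
            return i  # no neighbor is higher: stuck on this peak
        i = best

print(hill_climb(heights, 0))  # climbs to index 2, the nearby local peak (height 5)
print(hill_climb(heights, 4))  # climbs to index 6, the global peak (height 9)
```

      A smarter optimizer can mitigate this, for instance by restarting from many points or taking occasional downhill steps, but it still pays a search cost to escape; the obstacle is reduced, not removed.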

  • Silent Cal

    human mind:multicellular future minds::human body:factory?

    How many jobs are there in a factory that require a human body, rather than a human mind? I’m pretty sure we can build hardware physically capable of most anything a human is (but our software is nowhere near as good as the cerebellum).

  • Dave Lindbergh

    Not really disagreeing, but some counter points:

    Biological evolution is famous for being bad at getting out of local maxima – there is no “big picture” intelligence to see the potential gain of doing so and working toward it.

    While evolution will never go away, it will be assisted in intelligent software systems by intelligence to make those leaps more easily.

    Yes, entrenched systems are difficult to displace, but less so with foresight and catastrophe. For example, one explanation of the post-WW2 German economic miracle is that the Allies not only destroyed Germany’s capital stock, they also destroyed Germany’s entrenched legal and economic systems – allowing a new, more efficient system to take its place.

    Finally, it’s not clear to me that multicellular organisms are as sub-optimal as you seem to think. Molecular manufacturing doesn’t benefit from scale very much, and multicellularity offers considerable redundancy benefits. Not sure there are analogous benefits to small human minds.

    • As I commented below, intelligence can make it easIER to get out of local maxima, but it doesn’t at all make it easY. Entrenchment continues with systems managed by human-level intelligences, and it will continue even with superintelligence. Standard chemical engineering calculations suggest that molecular manufacturing can indeed benefit from scale.

  • Does entrenchment count as an instance of market failure?

  • Paul Christiano

    One quantitative disanalogy: the next hundred years of human history will involve more economic work than all of preceding human history, and so “redoing” previous work is not so expensive. By contrast, once multicellular organisms exist evolution has already put in a huge amount of effort and total effort is growing slowly, so reproducing that effort would take a very long time. So I don’t see the same kind of entrenchment as being nearly as plausible for humans during our current period of rapid growth (it may become much more plausible as we start spreading slowly through the stars).

    Biological work can be entrenched for a long time, until economic work catches up with evolution in an area (after which it won’t take too long before biology’s work is negligible, if we are still in the rapidly growing phase). But that’s not what you are talking about here, and entrenchment of social systems and culture and so on seems like a different phenomenon with different causes.

    • It sounds like you are saying that optimization processes will soon be so powerful that they can easily redo all of the optimization effort of biology since life began, including optimization of brain organization and design. I’m instead suggesting much less power for near future optimization. There is also the coordination problem that small parts of the world can be better off matching existing standards, even if a coordinated world might be better off redoing all the design from scratch.

      • Paul Christiano

        I mostly meant that total economic output over the next 100 years greatly exceeds total output over all of history. I agree that coordination is hard, but even spending a small fraction of current effort on exploring novel redesigns would be enough to quickly catch up with stuff designed in the past. This is a disanalogy between the situation of human designers and evolution, suggesting that we may have less need to reuse parts.

        I agree that early in history we will want to steal as much from biology as possible, and don’t have strong views about when that period ends (but don’t think the analogy to cells has much to say about that question).

      • I’m not sure “the next 100 years” is a useful unit of analysis. I prefer doubling times of the world economy as a unit, and expect the future to be very hard to see after enough doublings. So I’m trying to see into the early AI era where feasible, but not expecting to be able to see the whole thing.

        As with cells in large biological organisms, surely during the early AI era there will be a strong temptation to use existing legacy systems that work, even if they are hard to greatly refactor. Sure, in the very long run such legacies may perhaps have declining influence, but what reason do we have to think we can talk sensibly about such distant futures? We certainly know now of many legacy designs that have lasted a very long time so far.