
I'm not sure "the next 100 years" is a useful unit of analysis. I prefer doubling times of the world economy as a unit, and expect the future to be very hard to see after enough doublings. So I'm trying to see into the early AI era where feasible, but not expecting to be able to see the whole thing.

As with cells in large biological organisms, surely during the early AI era there will be a strong temptation to use existing legacy systems that work, even if they are hard to refactor substantially. Sure, in the very long run such legacies may have declining influence, but what reason do we have to think we can talk sensibly about such distant futures? We certainly know now of many legacy designs that have lasted a very long time so far.


I mostly meant that total economic output over the next 100 years greatly exceeds total output over all of history. I agree that coordination is hard, but even spending a small fraction of current effort on exploring novel redesigns would be enough to quickly catch up with stuff designed in the past. This is a disanalogy between the situation of human designers and evolution, suggesting that we may have less need to reuse parts.

I agree that early in history we will want to steal as much from biology as possible, and don't have strong views about when that period ends (but don't think the analogy to cells has much to say about that question).


It sounds like you are saying that optimization processes will soon be so powerful that they can easily redo all of the optimization effort of biology since life began, including optimization of brain organization and design. I'm instead suggesting much less power for near future optimization. There is also the coordination problem that small parts of the world can be better off matching existing standards, even if a coordinated world might be better off redoing all the design from scratch.


One quantitative disanalogy: the next hundred years of human history will involve more economic work than all of preceding human history, so "redoing" previous work is not so expensive. By contrast, once multicellular organisms exist, evolution has already put in a huge amount of effort and total effort is growing slowly, so reproducing that effort would take a very long time. So I don't see the same kind of entrenchment as being nearly as plausible for humans during our current period of rapid growth (it may become much more plausible as we start spreading slowly through the stars).
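To put a toy number on that first claim, here is a back-of-the-envelope sketch under the strong simplifying assumption of constant exponential growth (the growth rate and the assumption are mine, not the commenter's):

```python
import math

# Assume world output has always grown, and keeps growing, at a constant
# exponential rate r (a strong simplification). Cumulative past output is
# then (current annual output)/r, and output over the next T years is
# (current annual output) * (exp(r*T) - 1) / r.
r, T = 0.03, 100
ratio = math.exp(r * T) - 1
print(f"output over next {T} years ≈ {ratio:.0f}x all prior output")  # ~19x
```

The exact ratio depends heavily on the assumed growth path; the sketch just shows why sustained ~3% growth makes the claim plausible.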

Biological work can be entrenched for a long time, until economic work catches up with evolution in an area (after which it won't take too long before biology's work is negligible, if we are still in the rapidly growing phase). But that's not what you are talking about here, and entrenchment of social systems and culture and so on seems like a different phenomenon with different causes.


I find this response convincing, but because of the SW system analogy, not the multicellular one (in which evolution is constrained to local moves in genome space). I think the reason I raised the objection in the first place is that I misinterpreted 'entrenched' to imply a long stretch of real time rather than of subjective time, as I think you have been emphasizing. IOW: old habits die hard :-)


Does entrenchment count as an instance of market failure?


All known entrenched systems also have advantages from specializing. So I don't see how that issue is more important here than in most of those other contexts. For example, consider cells in multicellular organisms.


I think you're saying that there will be competing systems made of interacting pieces, and that in the most tangled parts of these systems those pieces may remain pretty human-brain-like for some time.

I question how long they could remain human-brain-like, given the advantages of specializing such pieces and the (presumably) greater *ability* to specialize AIs than to specialize (train, educate) actual human brains.


As I commented below, intelligence can make it *easier* to get out of local maxima, but it doesn't at all make it *easy*. Entrenchment continues with systems managed by human-level intelligences, and it will continue even with superintelligence. Standard chemical engineering calcs suggest that molecular manufacturing can indeed benefit from scale.
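For a flavor of the kind of calc being gestured at: one standard chemical-engineering heuristic is the "six-tenths rule," where plant capital cost scales roughly as capacity^0.6. The choice of this particular rule is my illustration; the comment doesn't specify which calcs it has in mind.

```python
# Six-tenths rule: capital cost ~ capacity**0.6, so cost per unit of
# capacity falls as capacity**(-0.4) -- i.e., bigger plants are cheaper
# per unit of output, an economy of scale.
for scale in (1, 10, 100):
    cost = scale ** 0.6
    print(f"capacity x{scale:>3}: total cost x{cost:5.1f}, unit cost x{cost / scale:.2f}")
```

At 10x capacity the per-unit cost falls to about 0.40x, and at 100x to about 0.16x; that falling per-unit cost is what a scale benefit means here.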


Not really disagreeing, but some counterpoints:

Biological evolution is famous for being bad at getting out of local maxima - there is no "big picture" intelligence to see the potential gain of doing so and work toward it.

While evolution will never go away, in intelligent software systems it will be assisted by intelligence that can make those leaps more easily.

Yes, entrenched systems are difficult to displace, but less so given foresight or catastrophe. For example, one explanation of the post-WW2 German economic miracle is that the Allies not only destroyed Germany's capital stock, they also destroyed Germany's entrenched legal and economic systems - allowing a new, more efficient system to take its place.

Finally, it's not clear to me that multicellular organisms are as suboptimal as you seem to think. Molecular manufacturing doesn't benefit from scale very much, and multicellularity offers considerable redundancy benefits. I'm not sure there are analogous benefits for small human minds.


*All* optimization systems have trouble with local maxima. Some might handle them better than others, but they are always an obstacle.


human mind:multicellular future minds::human body:factory?

How many jobs are there in a factory that require a human body, rather than a human mind? I'm pretty sure we can build hardware physically capable of almost anything a human body is (but our software is nowhere near as good as the cerebellum).


Interesting points.

Though the evolution comparison might not hold, since the difficulty in escaping local maxima is the classic limitation of evolution: it can only make changes in a series of small steps where every step is an improvement. Intelligent designers have a much greater ability to step back and come at a problem from a different angle.
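A minimal sketch of that contrast (the toy objective, step size, and restart strategy are my inventions, just to make the dynamic concrete): a greedy climber that, evolution-style, only accepts improving steps gets stuck on the nearer peak, while "stepping back" via random restarts finds the higher one.

```python
import random

def f(x):
    # Toy objective with two peaks: a local maximum near x = -1.35 and
    # a higher, global maximum near x = 1.47.
    return -x**4 + 4 * x**2 + x

def hill_climb(x, steps=1000, step=0.05):
    # Evolution-style search: small mutations, keep only improvements.
    for _ in range(steps):
        cand = x + random.choice([-step, step])
        if f(cand) > f(x):
            x = cand
    return x

random.seed(0)
stuck = hill_climb(-2.0)  # climbs the nearer peak and stays there
best = max((hill_climb(random.uniform(-3, 3)) for _ in range(20)), key=f)
print(f"greedy only:   x = {stuck:.2f}, f = {f(stuck):.2f}")  # ~(-1.35, 2.6)
print(f"with restarts: x = {best:.2f}, f = {f(best):.2f}")    # ~(1.47, 5.4)
```

The restarts stand in for a designer's ability to abandon a line of development entirely, which evolution, constrained to incremental improvements on existing genomes, can't do.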


I agree that human "cells" in future minds would be modestly modified. But I see a great deal of putting aside of individual interest as falling well within the range of modest modification. We already do so within existing collectives.


You make good arguments about the importance of flexibility in the component elements of the system, but the difference between humans and other kinds of software isn't merely flexibility.

In particular, another thing one sees in biological systems is that cells working as part of a multicellular organism often behave very differently than single-celled organisms do. Sure, part of that is a degree of specialization, but the big difference to me seems to be the degree to which cells in multicellular organisms give up the normal behaviors and interests of single-celled organisms. Indeed, human cells routinely kill themselves based on signals from their surroundings, and they accept the orders/directions of their neighbors while giving up the dispositions to find their own food, avoid predation, etc.

Now, I happen to believe we can do far more than modest modification, and if ems happen we will see extreme modification along similar lines. However, given your belief that only modest modification is possible, isn't the greater ability of other software systems to put aside individual interests and self-protective behaviors to work for a larger collective a counterbalancing consideration?


I'm not following you; care to rephrase?
