Entrenchit Happens

Most artificial systems made by humans slowly degrade over time until they become dysfunctional and are replaced. Because such systems rarely change or improve on their own, they are sometimes replaced while still functional by new, improved competitors.

Many systems, such as organisms and some kinds of firms, try to adapt to changing external conditions. But internal damage accumulates and eventually limits their ability to adapt quickly or well enough, and so they lose out to competitors. Empires may also decline due to internal damage.

Some larger systems, like species, nations, languages, and many kinds of firms, face many similar competitors, and rise and fall in ways that seem so random that it is hard to tell if they suffer much from internal damage, including in their ability to adapt to context.

In contrast, other larger systems face no competitors, at least for a long time, even though they are drawn from large spaces of possible systems. Consider, for example, that the community of mathematicians has created a total system of math that hangs together and is stable in many ways, yet is drawn from a vastly larger space of possibilities. The space of possible math axioms is astronomical, but mathematicians consistently reuse the same tiny set of axioms. One could say that those axioms have become “entrenched” in math practice.

Many other kinds of widely shared systems have few competitors, and yet entrench a set of specific practices drawn from a much larger space of possibilities. Consider, for example, the DNA code, the basic architectures of cells, and standard methods of making multi-cellular organisms. Or consider the shared features of most human languages, legal systems, financial systems, economic systems, and firm organization. Or even of computer languages and computer architectures. In each of these cases most of the world has long shared the same common set of interrelated practices, even though a vastly larger space of possibilities is known to exist and to have been little explored.

Such shared practices plausibly persist because they are just too much trouble to change. As I wrote last year:

When an architecture is well enough matched to a stable problem, systems built on it can last long and grow large, because it is too much trouble to start a competing system from scratch. But when different approaches or environments need different architectures, then after a system grows large enough, one is mostly forced to start over from scratch to use a different enough approach, or to function in a different enough environment.

In sum, entrenchment (or “entrenchit”) happens. I mention this to suggest that, as per my last post, known styles of software really could continue to dominate for long into the future. Many seem confident that very different styles will arise relatively soon on a civilizational time scale, and then mostly displace familiar styles. But who thinks we will soon see domination by new very different kinds of math axioms, human languages, legal systems, or world economic systems? Why expect more radical change in software than in most other things?

Yes, sometimes new systems really do arise to displace old ones. But notice that while small systems are often replaced, revolutions that replace interlocking sets of common worldwide practices are much rarer. And for such systems there are far more proposed and attempted revolutions than successful ones.

  • Software only having been around ~60 years might have something to do with it,

    • The software in our brains has been around a lot longer than that. And the software that’s been around for 60 years has been strikingly stable I’d say. Not looking as if it will suddenly change a lot.

      • arch1

        Costs of maintaining increasingly brittle legacy systems, length of planning horizons, interoperability issues, costs of clean-sheet design, emotional attachment/familiarity (for artificial systems, especially user-facing parts) and many other factors affect the lifetime of complex systems. In an AI-dominated world, might some of these factors change so as to favor reduced lifetimes?

      • Not that stable. And we have moved from where memory was a limitation, to speed, to complexity, and it has become much more capable, from text, to strokes, to graphics, to audio/video, and barely broached touch and not even broached smell and taste. Eras await.

  • Patrick Staples

It seems very difficult to correctly predict how future human-level intelligent software will be structured. It seems easier to correctly predict that it will largely be written in C.

  • In 10,000 years, our AI overlords will run on….Unix

So you can beat our future overlords by running
    rm -rf /

This seems sort of like a joke. But there was a big push to replace MS Windows with new OSes during Microsoft’s heyday, which I thought would happen. Then, with the rise of smartphones adopting variants of Linux, that whole discussion stopped dead in its tracks. So now I would not be surprised to see Unix just live on forever, especially as it had an open-sourcish birth, which gives it more flexibility than many OSes. It’s now almost 50 years old, and more entrenched than ever. Anyway, I guess my point here is that Unix is a great example of your thesis.

  • Steve Witham

    With computer software, it’s hard to tell because of Moore’s Law: it is easy to store everything from the past, and while processors keep speeding up and adding cores, we can use a fraction of the power to emulate old stuff on new platforms.


  • How would you change mathematics?

  • Ronfar

    Most possible mathematical axioms are meaningless and uninteresting, as are most possible sequences of English words. When a new *interesting* set of mathematical axioms is discovered, it’s a big deal: you get things like calculus, set theory, non-Euclidean geometry, topology, etc.

  • Joe

To what extent does this actually limit the capabilities of those larger systems, though? Seems to me that with sufficient modularity, the system as a whole can be pretty adaptive even when many of its parts are too entrenched (i.e., interconnected) to be worth changing.

The more that an entrenched system can be adaptive, the less reason there is to switch to something else, and the longer it can last.

  • Gunnar Zarncke

    Reminds me of Gall’s Systemantics ( https://en.wikipedia.org/wiki/Systemantics ), in particular the principles
    – “As systems grow in size, they tend to lose basic functions.”
    – “The larger the system, the less the variety in the product.”

    Points that you don’t spell out explicitly but that clearly support your point:
    – “A complex system that works is invariably found to have evolved from a simple system that works.”
    – “A complex system designed from scratch never works…”

While all of these principles are anecdotal (and often humorous), I think they draw on crucial insights into these kinds of complex systems.

  • Will Sawin

The description of math axioms is not so good an illustration of your point. In mathematics there is a phenomenon where we can interpret one system of axioms within another, so every piece of mathematics done in one system of axioms carries over to the other. (This is in contrast to physical systems, where there is always some cost to interfacing two different systems.)

    Setting aside the issue of axiom systems mathematicians haven’t thought up yet, we understand pretty well which systems can be interpreted within which other systems.
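A standard illustration of such an interpretation (my example, not part of the original comment): Peano arithmetic can be read inside ZFC by modeling the natural numbers as finite von Neumann ordinals.

```latex
% Interpreting Peano arithmetic (PA) inside ZFC:
% read each numeral as a finite von Neumann ordinal.
0 \coloneqq \emptyset, \qquad S(n) \coloneqq n \cup \{n\}
% Under this reading, every PA theorem becomes a ZFC theorem
% about the set \omega of finite ordinals, with no cost at the
% "interface" between the two axiom systems.
```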

    Instead a much bigger issue is with mathematical concepts and definitions. We define fields of mathematics around mathematical concepts, and study questions that can be simply expressed using those concepts. It is hard to move to new concepts because so much of our previous work is expressed using the old concepts.