13 Comments

The description of math axioms is not so good an illustration of your point. In mathematics there is a phenomenon where we can interpret one system of axioms within another, so every piece of mathematics done in one system carries over to the other. (This is in contrast to physical systems, where there is always some cost to interfacing two different systems.)

Setting aside the issue of axiom systems mathematicians haven't thought up yet, we understand pretty well which systems can be interpreted within which other systems.

Instead, a much bigger issue is with mathematical concepts and definitions. We define fields of mathematics around mathematical concepts, and study questions that can be simply expressed using those concepts. It is hard to move to new concepts because so much of our previous work is expressed using the old concepts.
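A standard concrete example of this kind of interpretation (my illustration, not anything specific to the post): arithmetic can be read inside set theory by encoding each natural number as the set of all smaller numbers, roughly:

```latex
% Sketch: interpreting arithmetic inside set theory (von Neumann encoding).
% Each natural number is identified with the set of all smaller numbers.
\[
  0 := \varnothing, \qquad S(n) := n \cup \{n\},
\]
% so that
\[
  1 = \{0\} = \{\varnothing\}, \qquad
  2 = \{0,1\} = \{\varnothing,\{\varnothing\}\}, \qquad
  m < n \iff m \in n .
\]
% Under this translation the Peano axioms become theorems of set theory,
% so any proof carried out in arithmetic can be read as a set-theoretic proof.
```

Once the translation is fixed, every arithmetic result is automatically a set-theoretic result, which is why switching axiom systems costs mathematicians so little.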


Reminds me of Gall's Systemantics ( https://en.wikipedia.org/wi... ), in particular the principles:
- "As systems grow in size, they tend to lose basic functions."
- "The larger the system, the less the variety in the product."

Points that you don't spell out explicitly but that clearly support your point:
- "A complex system that works is invariably found to have evolved from a simple system that works."
- "A complex system designed from scratch never works..."

While all of these principles are anecdotal (and often humorous), I think they draw on crucial insights into these kinds of complex systems.


The more that an entrenched system can be adaptive, the less reason there is to switch to something else, and the longer it can last.


To what extent does this actually limit the capabilities of those larger systems, though? Seems to me that with sufficient modularity, the system as a whole can be pretty adaptive even when many of its parts are too entrenched (i.e., interconnected) to be worth changing.
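A minimal sketch of the shape of that argument, in C (hypothetical names, not any real system): as long as the interface in the middle stays fixed, the entrenched implementation and the newer code around it can change independently.

```c
/* Toy illustration of modularity: only the interface below is visible to
 * other code, so the entrenched internals and the newer code around them
 * can evolve independently. (Hypothetical names; just a sketch.) */
#include <stdio.h>
#include <string.h>

/* --- The stable interface: effectively frozen. --- */
int store_put(const char *key, const char *value);
const char *store_get(const char *key);

/* --- The entrenched implementation: too interconnected to be worth
 *     rewriting, but nothing outside depends on its details. --- */
#define MAX_ENTRIES 64
static char keys[MAX_ENTRIES][32], values[MAX_ENTRIES][32];
static int n_entries = 0;

int store_put(const char *key, const char *value)
{
    if (n_entries >= MAX_ENTRIES) return -1;
    strncpy(keys[n_entries], key, 31);
    strncpy(values[n_entries], value, 31);
    n_entries++;
    return 0;
}

const char *store_get(const char *key)
{
    for (int i = 0; i < n_entries; i++)
        if (strcmp(keys[i], key) == 0) return values[i];
    return NULL;
}

/* --- Newer, adaptive code: added without touching the old module. --- */
int main(void)
{
    store_put("alice", "logged-in");
    printf("alice: %s\n", store_get("alice"));
    return 0;
}
```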


Not that stable. And we have moved from where memory was a limitation, to speed, to complexity, and it has become much more capable: from text, to strokes, to graphics, to audio/video, and it has barely broached touch and not even broached smell and taste. Eras await.


Costs of maintaining increasingly brittle legacy systems, length of planning horizons, interoperability issues, costs of clean-sheet design, emotional attachment/familiarity (for artificial systems, especially user-facing parts) and many other factors affect the lifetime of complex systems. In an AI-dominated world, might some of these factors change so as to favor reduced lifetimes?


Most possible mathematical axioms are meaningless and uninteresting, as are most possible sequences of English words. When a new *interesting* set of mathematical axioms is discovered, it's a big deal: you get things like calculus, set theory, non-Euclidean geometry, topology, etc.


How would you change mathematics?


With computer software, it's hard to tell because of Moore's Law: it is easy to store everything from the past, and as processors keep speeding up and adding cores, we can use a fraction of their power to emulate old stuff on new platforms.
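A toy sketch of that emulation point (a made-up instruction set, not any real one): the new platform only has to implement a small dispatch loop, and the old programs come along unchanged.

```c
/* Toy emulator for a made-up four-instruction machine, illustrating how a
 * small dispatch loop on new hardware can carry old programs forward. */
#include <stdio.h>

enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_PRINT = 3 };

void run(const int *program)
{
    int acc = 0, pc = 0;
    for (;;) {
        switch (program[pc]) {
        case OP_LOAD:  acc = program[pc + 1];  pc += 2; break;  /* acc = imm  */
        case OP_ADD:   acc += program[pc + 1]; pc += 2; break;  /* acc += imm */
        case OP_PRINT: printf("%d\n", acc);    pc += 1; break;  /* print acc  */
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    /* An "old" program: the bits never change, only the machine running them. */
    int old_program[] = { OP_LOAD, 2, OP_ADD, 40, OP_PRINT, OP_HALT };
    run(old_program);   /* prints 42 on whatever hardware hosts the emulator */
    return 0;
}
```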


In 10,000 years, our AI overlords will run on... Unix.

So you can beat our future overlords by running rm -rf /

This seems sort of like a joke. But there was a big push to replace MS Windows with new OSes during Microsoft's heyday, which I thought would happen. Then with the rise of smartphones, which adopted variants of Linux, that whole discussion stopped dead in its tracks. So now I would not be surprised to see Unix just live on forever, especially as it had an open-source-ish birth, which gives it more flexibility than many OSes. It's now almost 50 years old, and more entrenched than ever. Anyway, I guess my point here is that Unix is definitely a great example of your thesis.


The software in our brains has been around a lot longer than that. And the software that's been around for 60 years has been strikingly stable, I'd say. Not looking as if it will suddenly change a lot.


It seems very difficult to correctly predict how future human-level intelligent software will be structured. It seems easier to correctly predict that it will largely be written in C.


Software only having been around ~60 years might have something to do with it.
