14 Comments

Happy to hear of this expert confirmation.


Examples of code cubed would be the following instances of generated code that are already pretty common:
- classes generated from a database schema to represent tables
- classes generated from an API specification (e.g. OpenAPI)
- classes generated from an XSD schema to represent XML entities

I would say it is well known among programmers that this kind of generated code rots very fast and has to be regenerated every time the schema changes. The generated code is very rarely edited by hand, exactly because of this.
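
For concreteness, here is a minimal sketch (the table and field names are hypothetical) of the kind of class such a generator might emit from a database schema:

```python
# Hypothetical output of a code generator run against a database schema.
# DO NOT EDIT BY HAND: this file is regenerated whenever the `users`
# table changes, so manual edits would be overwritten.
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    """Mirrors the `users` table; field names and types come from the schema."""
    id: int
    email: str
    display_name: Optional[str] = None
```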


Ohh yes, I very much expect there will be. That's why I think it's implausible to assume the most useful worker will resemble a biological human in desires and behavior. That assumption seems to rest on the idea that there are hard barriers, and that we won't see hybrids or substantial changes that make ems very alien, perhaps not even the kind of thing we'd recognize as a complete agent (e.g. one having a single semi-coherent revealed preference across a wide array of circumstances).


Yes! Thank you. I tried to formulate this at one point but totally failed. How flexible code is regarding changes to the code base is a different question than how flexible the system instantiated by that code base happens to be.

In other words, if I write an ML algorithm, the flexibility in execution (I find internal/external confusing, since I'd flip them) is a measure of the number of applications it could usefully be applied to. The code flexibility is how difficult it would be to take that code and change it to add some new feature.

I'd guess the human brain is super inflexible in terms of the code base (evolution is damn slow and messy), and I see no reason to presume the same degree of flexibility in execution couldn't be achieved with a less inflexible 'system code'.
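
A toy sketch of the distinction (the nearest-neighbor example and its names are mine, purely for illustration):

```python
# Flexibility in execution: the same unchanged code usefully applies to many
# tasks, just by swapping in a different `distance` function.

def nn_classify(query, examples, distance):
    """Return the label of the training example nearest to `query`.
    Works on images, text vectors, sensor logs... anything with a distance."""
    return min(examples, key=lambda ex: distance(query, ex[0]))[1]

# Code flexibility is a separate question: adding, say, weighted voting over
# the k nearest examples means editing the function itself, and how painful
# that is depends on how the source is organized, not on how many tasks the
# current version already handles.
```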


The distinction I was trying to establish is between self-adaptation vs. adaptation by outside agents (humans). Code rot, as traditionally used, means the phenomenon where code becomes increasingly disorganized through the accumulation of small changes that break down its organization, making it hard for humans to change (as reasoning about it now requires taking all of its details into account).

For example, a chess engine would not be able to adapt to the rules of Go. However, developers might try to adapt the code base to play Go, and this might prove difficult if the code is disorganized, e.g. by encoding heuristics about specific chess moves directly into the search algorithm. The latter is an example of "code rot", but the former is not (I suggest).
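
To illustrate that failure mode, here is a hypothetical sketch (the `position` object and its methods are assumed for illustration, not taken from any real engine) of chess knowledge leaking into an otherwise generic search:

```python
# "Rotted" engine code: chess-specific heuristics are baked directly into a
# generic negamax search, so adapting this code base to Go means hunting
# down and untangling each of them by hand.

CENTER = {"d4", "d5", "e4", "e5"}  # a chess-only concept

def search(position, depth):
    if depth == 0:
        return position.material_balance()  # chess-specific evaluation
    best = float("-inf")
    for move in position.legal_moves():
        # Chess knowledge entangled with the search itself:
        bonus = 0.1 if move.piece == "N" and move.to_square in CENTER else 0.0
        best = max(best, bonus - search(position.apply(move), depth - 1))
    return best
```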


To be precise, rot means decreased adaptability. I agree that one can distinguish adaptation to internal vs. external changes. Can you summarize any rules of thumb we have that distinguish between the two?


Thanks for yet another interesting post!

Code rot: You use this term to cover the loss of any kind of adaptiveness to change, whether by external or internal means. As a programmer I find this surprising, and I suggest that it is often clarifying to distinguish between external and internal adaptability, and that "code rot" should be reserved for the former. After all, we can envision a system that is internally adaptive (i.e. containing general tools and code for selecting and integrating them flexibly in response to changing conditions), and at the same time externally non-adaptive (i.e. its code is a total mess). Indeed, for programmers external vs. internal adaptability is a well-known tradeoff, as it is hard to design for both kinds of adaptability at once. In this usage of the term, the brain code is close to maximally rotten.
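
A minimal sketch of that contrast (the tools and the selection rule are invented for illustration):

```python
# Internally adaptive: the running system selects among general tools in
# response to conditions, with no programmer involved.

TOOLS = {
    "exact":  lambda xs: sum(xs) / len(xs),                 # exact mean
    "sample": lambda xs: sum(xs[::100]) / len(xs[::100]),   # cheap approximation
}

def process(xs):
    # Self-adaptation at runtime: choose a tool based on the input itself.
    return TOOLS["exact" if len(xs) < 10_000 else "sample"](xs)

# External adaptability is orthogonal: whether a programmer can easily add a
# third tool or change the selection rule depends on how tangled the source
# is, not on how flexibly the system behaves while running.
```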


"This is a very important post" - I'm glad someone agrees! :)In Age of Em I talk about em teams as the typical unit of copying. Yes, direct uploads of particular individuals are likely feasible well before "generic uploads". Yes, we'll try to pare down ems to a miniimal most cost-effective unit.


This is a very important post, Robin!

The very survival of most humans in the next 100 years depends on the economic viability of uploaded human code in competition with various newly created forms of code.

I think that superminds, that is society-like agglomerations of individually sentient minds that do not replicate freely, will come to dominate the world and will subsume or control most of the solar system's available mass. Here the ability to compete with created code will be absolutely indispensable to survival. But even if this does not come true, and only conventional societies of independent replicators are viable in our version of physics, our uploaded selves will need to compete with self-aware AIs and subsentient code to survive.

I generally agree with your assessment that human code has the best chances for survival when playing to its strengths: a very general capability to analyze complex situations and cobble together useful behavioral responses without running millions of simulations. So far, humans have been ignominiously beaten in games where millions of simulated trials can be performed quickly, allowing a GAN to amass the equivalent of hundreds of years of player experience. But there are many very important real-life challenges where a response is needed to a never-before-seen and highly complex situation. The general set of skills for approaching such situations may very well turn out to look like a sentient, self- and world-aware, human-like mind, rather than a non-sentient GAN or other neural construct. If this is the case, our uploads would have a long future, as you say.

Still, even under the best circumstances the competition will be stiff. It will come from self-aware AI without any human parts, i.e. written from scratch. It may also come from AI produced by training a blank generic upload - a network closely based on the structure of a generic human brain scan, without the precise synaptic pattern encoding individual memories. I do think that such a generic upload will be available for training before a precise upload of an individual is done.

There are a couple of strategies I can think of that might help in surviving in the future. First of all, travel light. Not every synapse in my mind is important to me. It's OK to slice and dice until only a core identity is left; it may turn out to be little more than the indexical information differentiating me from other humans, maybe only a few hundred MB. Secondly, pay your way. If I can use my estate to subsidize the evolution and upgrades of my upload, I have a much better chance of becoming a productive part of the em society or the supermind than if I just showed up there hoping to get hired for virtual burger-flipping jobs. And thirdly, as I already mentioned, be there as early as you can. The first-mover advantage may be extreme in the new world, just as you wrote in The Age of Em.

One way or another, it will be a very exciting 100 years.


I'd call that one of those "few parts of artificial code [that] are generated via statistical analysis of large datasets." Or rather that is a kind of language in which such code can be written. So far a pretty small fraction of code is made this way, but a growing part at the moment. Such code tends to be simpler at first, but with time and more usage it will of course get a lot more complex. Our experience with such things doesn't so far tell us much about how this kind of artificial code differs from other kinds.
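
A minimal sketch of what "code generated via statistical analysis of large datasets" looks like at its very simplest (the linear model, learning rate, and step count are my own illustrative choices):

```python
# "Code" whose behavior is produced by gradient descent on examples rather
# than written by hand: the learned numbers play the role of hand-written logic.
import random

def train_linear(data, lr=0.01, steps=10_000):
    """Fit y ~ w*x + b by stochastic gradient descent on squared error."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        x, y = random.choice(data)
        err = (w * x + b) - y
        w -= lr * err * x   # gradient of (err**2)/2 with respect to w
        b -= lr * err       # gradient of (err**2)/2 with respect to b
    return w, b

# Example: approximately recover y = 2x + 1 from noise-free samples.
w, b = train_linear([(x, 2 * x + 1) for x in range(10)])

# "Editing" such code mostly means retraining on new data, not hand-patching,
# which is one way its rot may differ from that of ordinary artificial code.
```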


How do neural networks trained by gradient descent fit into this picture?


Another question that isn't much considered here is which tasks are economically relevant. In the limit, you could have clanking replicators that need a lot of production-line work done, all of which is done by artificial code. Even if only humans can make art, perhaps no one wants to buy art.

Given current compute prices and wages, almost all human labor is orders of magnitude cheaper than the artificial hardware time needed to simulate it. However, in your hypothetical, Moore's law carries on until em-capable hardware is really cheap. This leaves several years with biological humans who have a large incentive to automate anything they can. Any task that is performed a lot, and for which the artificial code is as effective as human code, will carry on with artificial code. A few pieces of Acode that worked, but really slowly, might be replaced by Hcode.


Why shouldn't there be different kinds of code in the future? Far views tend to induce a perception of fewer future differences, but real worlds have many differences and distinctions. Even if capabilities improve overall, there will still be different design approaches with different relative advantages.


Do you expect the division between brain code and artificial code to remain for long? Certainly once we have ems, and quite likely even without them (via brain-machine interfaces), we should expect hybrid systems that blur these distinctions.

It's hard to imagine what kinds of systems would work well here, since we tend to equate what our brains do with what we are consciously aware of, but there is no reason why human-machine interfaces or lobotomized em code couldn't take the output of our low-level visual processing and pass it to artificial code.

Perhaps in the near future we will even see people paid to simply sit still with their eyes closed while computers pass problems through their low level visual systems.
