Imagine that you want to untangle a pile of cables. It wasn’t tangled on purpose; tangling just resulted naturally from how these cables were used. You’d probably look for the least tangled cable in the least tangled part of the pile, and start to work there. In this post I will argue that, in a nutshell, this is how we are slowly automating our world of work: we are un- and re-tangling it.
This has many implications, including for the long-term future of human-like creatures in a competitive world. But first we have a bit of explaining to do.
Complex systems are at bottom all made of simple parts; what ultimately makes systems complex is their network of connections. When small things are clumped into larger things, we can distinguish internal from external complexity; something can be complex on the inside even when it is simply connected to other things at its level. We can also distinguish surface complexity from neighborhood complexity; something can be simply connected to its immediate neighbors, and yet still be connected to an especially complex local network region.
When systems do tasks, what makes them complex is their network of task dependencies: two tasks are more connected when they are more interdependent, i.e., when their doing must be coordinated more carefully, in more detail. Let me call tasks “tangled” when they sit within highly connected regions of task dependencies.
We can include the non-task world as part of the network of dependencies; some tasks must be coordinated with complex details of that world. When these dependencies are included in our network, we can see that even the tasks we do without coordinating with other people or software can be “tangled” in an important sense. Tangling in this broader sense is a big part of what makes tasks hard.
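To make the notion concrete, here is a minimal sketch of “tangling” as local connectivity in a task-dependency network. The tasks and edges are invented for illustration; the score simply counts dependency edges within a task’s immediate neighborhood, so tasks in dense regions score higher than tasks in sparse ones.

```python
# A toy measure of task "tangling": count dependency edges within a
# task's local region (the task plus its direct neighbors). All task
# names and edges here are hypothetical.
from itertools import combinations

# Undirected edges mean "these two tasks must be coordinated in detail".
edges = {
    ("plan", "schedule"), ("plan", "budget"), ("schedule", "budget"),
    ("schedule", "staff"), ("budget", "staff"),
    ("file", "archive"),  # a lightly connected pair off to the side
}

def neighbors(task):
    """Tasks directly coordinated with this one."""
    return {b if a == task else a for a, b in edges if task in (a, b)}

def tangling(task):
    """Number of dependency edges inside the task's local region."""
    region = neighbors(task) | {task}
    return sum(1 for a, b in combinations(sorted(region), 2)
               if (a, b) in edges or (b, a) in edges)

print(tangling("schedule"))  # sits in a densely connected region
print(tangling("file"))      # sits in a sparsely connected region
```

On this toy graph, “schedule” scores 5 while “file” scores 1, matching the intuition that the former sits in a far more tangled region.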
We know many things about the tangling of tasks. In software, we know that tangling is the main obstacle to making big systems. (Note that I’m lumping many different kinds of artificial code together as “software”.) Tasks and connections must be designed carefully to be “modular”, i.e., to cut tangling. Systems tend to be organized in the direction of trees, with the “root” control parts of such trees more tangled and harder to change. Shared utilities and interface standards also tend to be more tangled and harder to change. It is usually worth paying more for better people to be more careful when writing more tangled code.
Among systems that have roughly the same parts and are equally tangled, better “integrated” ones are more useful. Our more celebrated systems tend to be better integrated. For example, when one “abstracts” from a common pattern found in several sections of software code, that pattern becomes a new task which is now connected to those old tasks. While the code is now more connected, it can also be more usefully changed. Also, as systems adapt to changing context, they tend to get more tangled, and harder to usefully change. Sometimes such systems are “refactored” to become better integrated and more easily changed. But even then, they eventually get so fragile that one is tempted to redesign and build them from scratch.
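The abstraction move described above can be shown in a short before/after sketch (the functions and validation rule are invented for illustration). A pattern repeated in two routines is pulled out into a shared helper: the code becomes more connected, since both routines now depend on the new task, but a change to the rule can now be made usefully in one place.

```python
# Before: the same validation pattern repeated in two places.
def ship_order_v1(order):
    if not order.get("id") or order.get("qty", 0) <= 0:
        raise ValueError("bad order")
    return f"shipped {order['id']}"

def refund_order_v1(order):
    if not order.get("id") or order.get("qty", 0) <= 0:
        raise ValueError("bad order")
    return f"refunded {order['id']}"

# After: the pattern becomes a new task, connected to both old ones.
def validate(order):
    if not order.get("id") or order.get("qty", 0) <= 0:
        raise ValueError("bad order")

def ship_order(order):
    validate(order)   # new connection
    return f"shipped {order['id']}"

def refund_order(order):
    validate(order)   # new connection
    return f"refunded {order['id']}"
```

The “after” version has more edges in its dependency network, yet tightening the validation rule now touches one function instead of two.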
The human economy of job tasks is also tangled. As in software, we tend to clump tasks into jobs, jobs into teams, teams into divisions, and products into firms, all to keep more tangled task sets together in scopes where coordination is cheaper. It costs more to coordinate between people than within one person’s head, and more across divisions than within them. It costs more to coordinate further away in spatial and social distance, especially with foreign nations. Firms use tangling to decide whether to make or buy things, and which divisions to acquire or divest.
Management tasks tend to be more tangled, as are tasks done at larger firms. Tasks tied to law, regulation, and government can be especially tangled. More-tangled-on-average tasks include judging quality, determining compliance, making decisions, thinking creatively, developing strategies, scheduling and planning, interpreting, communicating, making and maintaining relations, selling, resolving conflicts, coordinating, training, motivating, advising, and administration. Less-tangled-on-average tasks include monitoring, identifying, estimating, handling and moving objects, operating and controlling machines and processes, using computers, drafting and specifying devices, and equipment repair and maintenance.
People who do tangled tasks tend to get paid more, especially in larger organizations. (People who have more tangled social connections are also seen as higher status.) Knots of tangled tasks are harder to change, requiring larger and more expensive reorganizations. The most tangled tasks also tend to be done in the largest cities, and toward the centers of those cities, where people are paid the most. The most tangled products tend to be exported from the most tangled nations.
When doing a task, the human brain typically draws on many brain regions. Some tasks draw on more than others. Different regions implement different tools, and compared to most familiar software, the brain has a very wide range of tools at its disposal. Even so, evolution was limited in how many tools it could build into a brain, because brain volume was limited.
Evolution was less limited in how well it could integrate brain tools; it could search long and hard for better ways to connect its limited set of tools. And our fluid flexible behavior suggests that the human mind integrates its wide range of tools very well. Some even go so far as to call our minds “general”, though of course we seem to be pretty bad at many tasks.
The most tangled brain regions are two key networks: one that manages attention, working memory, and decision-making, and another that manages mind-wandering, long-term memory retrieval, and self-reflection. Also, when comparing brains to ordinary computers, brain volume better connotes the computing resources devoted to a tool than it does the lines-of-code complexity of that tool.
Once we had artificial computers, we could use them to “automate” human mental tasks. (Of course, computers influence jobs and tasks in many other ways than via automation.) Oft-done tasks can more easily justify spending the key fixed costs of writing needed software, and of adapting neighboring tasks to changes in this one. A key marginal cost was for hardware to execute that code. Sometimes refactoring whole sets of related tasks helps to enable automation.
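The tradeoff described above can be put as a toy break-even calculation (all numbers invented): automation pays when the fixed cost of writing the software, spread over the number of times the task is done, plus the marginal hardware cost per run, beats the human cost per run.

```python
# A toy break-even sketch of the automation tradeoff: fixed software
# cost plus marginal hardware cost versus human cost per execution.
# All costs and counts below are hypothetical.
def automation_pays(fixed_cost, hw_cost_per_run, human_cost_per_run, runs):
    """Automate when total software cost beats total human cost."""
    return fixed_cost + hw_cost_per_run * runs < human_cost_per_run * runs

# An oft-done task spreads the fixed cost over many executions...
print(automation_pays(100_000, 0.01, 1.00, runs=1_000_000))  # True
# ...while a rarely done task cannot justify it.
print(automation_pays(100_000, 0.01, 1.00, runs=10_000))     # False
```

This also shows why falling hardware prices matter: as `hw_cost_per_run` drops, ever more complex (and so more expensive-to-write) tasks clear the break-even point.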
Before artificial computers, humans were basically the only computers available, and so they had to be used for all needed tasks, even those requiring only a tiny fraction of human capability. These simplest tasks were the first to be automated. Then as hardware got cheaper and we could afford to spend more on software, we worked to automate more complex tangled tasks. These are tasks that use more of the tools within each brain, that have a lot of complex internal structure, and that must be coordinated in more detail with a complex non-human or human world.
So far, humans have been limited in their competition with automation because their brain software has been stuck inside its brain hardware. Compared to today’s artificial hardware, human brain hardware is good at memory and parallel computation, but terrible at communication with outside systems. If our brain hardware remains stuck while artificial hardware keeps getting better, it seems that eventually everything must be done better and cheaper artificially.
However, eventually brain emulations (ems) should be possible. Then human software can use artificial hardware and compete on more equal terms. And once we understand more about human software, we’ll be able to change it at least somewhat. At that point, the question for each task will be: is this task better done by a descendant of human software, by a descendant of artificial software made via some process recognizably like how we now make software, or by software made via some other process?
Some seem to think the answer is obvious: descendants of human software must always lose. After all, human software was designed to perform in an obsolete forager environment, using limited brain hardware with poor communications. How could that possibly win against shiny new software? So they think humans must mainly plan how to retire gracefully from work, while somehow retaining an iron fist of control over systems much more capable than they are.
But this ignores the very real existence of long-lived legacy systems. The world is full of large, long-lived, complex systems deeply tangled with other systems. Often, for long stretches of time, no one has sufficient incentive to redesign them from scratch, instead of incrementally adapting them to new circumstances. Why can’t human minds be such legacy systems?
Now I’m not trying to make any grand claims that I know in detail about the kind of software that will be most competitive in a trillion years. I’m instead saying that we should keep an open mind about the long-term advantages and disadvantages of descendants of human brain software, relative to future competitors.
Yes, our brain software has the disadvantages of being designed to behave in a long obsolete environment, using limited hardware with poor communication. Its designer didn’t even leave us documentation or a test suite. But human brain software also has two huge advantages.
First, human brains are the existing installed system to which a great many other systems have long been adapting, and with which they have been becoming deeply entangled. A great many tools and standards are designed with our brains in mind.
So systems that descend from human brains may retain the two key networks in our brains, even if new mind modules connect to those networks. Those descendants may talk to each other via a recognizable descendant of natural language, even if increased communication bandwidth allows those languages to be far more powerful. They may make agreements with each other using a recognizable descendant of familiar contract law, even if their law becomes much more flexible and powerful. And so on.
Our second big advantage is that human brains contain a wide set of mental tools that are very well integrated, far better than almost all the software that we have ever created. We have worked long and hard, and with varying success, to create software substitutes for each of the capacities we have seen our brains perform. But we really have little idea how to put that all together into a well-integrated whole.
When organizations write software, that software is less well integrated than software written by a single person. And the structure of such software tends to reflect the communication structure of the organization that writes it. This suggests that when we put software that we write to the task of writing more software, that further software will probably be even less well integrated than the software that we write directly. Our strong mental integration probably helps us to write more integrated software.
Human minds now do most tasks in the massive network of interconnected tasks that is our civilization. But we are slowly automating those tasks, starting with the tasks that rely more on the least tangled parts of our minds, and that are the least tangled with other tasks done by ourselves, our co-workers, our software tools, and with the larger non-task world.
The disadvantage that artificial software is less well integrated than human brains is most tolerable for these least tangled tasks. But this disadvantage will come to matter more as we try to automate our more tangled tasks. Even when we can create substitutes for all of the tools in a human brain, we may still struggle to create integrated systems containing those tools. And until we learn to integrate software well, we may continue to have to throw away our large systems as they become too fragile to adapt well, and continue to rely on human-like minds to design at least somewhat integrated systems.
So we may often retain systems that inherit the structure of the human brain, and the structures of the social teams and organizations by which humans have worked together. All of which is another way to say: descendants of humans may have a long future as workers. We may have another future besides being retirees or iron-fisted peons ruling over gods. Even in a competitive future with no friendly singleton to ensure preferential treatment, something recognizably like us may continue. And even win.