A Tangled Task Future

Imagine that you want to untangle a pile of cables. It wasn’t tangled on purpose; tangling just resulted naturally from how these cables were used. You’d probably look for the least tangled cable in the least tangled part of the pile, and start to work there. In this post I will argue that, in a nutshell, this is how we are slowly automating our world of work: we are un- and re-tangling it.

This has many implications, including for the long-term future of human-like creatures in a competitive world. But first we have a bit of explaining to do.

Complex systems are at bottom all made of simple parts; what ultimately makes systems complex is their network of connections. When small things are clumped into larger things, we can distinguish internal from external complexity; something can be complex on the inside even when it is simply connected to other things at its level. We can also distinguish surface complexity from neighborhood complexity; something can be simply connected to its immediate neighbors, and yet still be connected to an especially complex local network region.

When systems do tasks, what makes them complex is their network of task dependencies: two tasks are more connected when they are more interdependent, i.e., when their doing must be coordinated more carefully, in more detail. Let me call tasks “tangled” when they sit within highly connected regions of task dependencies.

We can include the non-task world as part of the network of dependencies; some tasks must be coordinated with complex details of that world. When these dependencies are included in our network, we can see that even the tasks we do without coordinating with other people or software can be “tangled” in an important sense. Tangling in this broader sense is a big part of what makes tasks hard.
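One way to make this notion of tangling concrete is as a local graph metric: count how densely connected a task’s neighborhood is in the task-dependency network. The sketch below is purely illustrative; the `deps` map and the `tangledness` function are hypothetical names invented for this example, not anything from the post.

```python
from collections import defaultdict

def tangledness(deps, task, depth=2):
    """Count dependency edges within `depth` hops of `task`.

    `deps` maps each task to the tasks it must be coordinated with.
    A higher count means the task sits in a more densely connected
    (more "tangled") region of the network.
    """
    # Build an undirected adjacency view: coordination runs both ways.
    adj = defaultdict(set)
    for a, bs in deps.items():
        for b in bs:
            adj[a].add(b)
            adj[b].add(a)

    # Collect every task within `depth` hops via breadth-first search.
    frontier, region = {task}, {task}
    for _ in range(depth):
        frontier = {n for f in frontier for n in adj[f]} - region
        region |= frontier

    # Tangledness = edges whose endpoints both lie in that region,
    # counting each undirected edge once (via the a < b ordering).
    return sum(1 for a in region for b in adj[a] if b in region and a < b)
```

On this measure, a payroll task coordinated with ledgers and tax rules scores higher than an isolated task like mowing a lawn, matching the intuition that the former is harder to automate or change.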

We know many things about the tangling of tasks. In software, we know that tangling is the main obstacle to making big systems. (Note that I’m lumping many different kinds of artificial code together as “software”.) Tasks and connections must be designed carefully to be “modular”, i.e., to cut tangling. Systems tend to be organized in the direction of trees, with the “root” control parts of such trees more tangled and harder to change. Shared utilities and interface standards also tend to be more tangled and harder to change. It is usually worth paying more for better people to be more careful when writing more tangled code.

Among systems that have roughly the same parts and are equally tangled, better “integrated” ones are more useful. Our more celebrated systems tend to be better integrated.  For example, when one “abstracts” from a common pattern found in several sections of software code, that pattern becomes a new task which is now connected to those old tasks. While the code is now more connected, it can also be more usefully changed. Also, as systems adapt to changing context, they tend to get more tangled, and harder to usefully change. Sometimes such systems are “refactored” to become better integrated and more easily changed. But even then, they eventually get so fragile that one is tempted to redesign and build them from scratch.
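The “abstracting” move described above can be shown with a toy refactor (the functions here are hypothetical, chosen only for illustration): a pattern duplicated in two places becomes a shared task that both old tasks now depend on, adding a connection but making useful change easier.

```python
# Before: the same validation pattern is duplicated in two places.
def register_user(name):
    if not name or not name.strip():
        raise ValueError("name must be non-empty")
    return {"user": name.strip()}

def register_team(name):
    if not name or not name.strip():
        raise ValueError("name must be non-empty")
    return {"team": name.strip()}

# After: the pattern is abstracted into one shared task. Both callers
# are now connected to clean_name -- the system is more tangled -- but
# a policy change (say, a length limit) now happens in exactly one place.
def clean_name(name):
    if not name or not name.strip():
        raise ValueError("name must be non-empty")
    return name.strip()

def register_user_v2(name):
    return {"user": clean_name(name)}

def register_team_v2(name):
    return {"team": clean_name(name)}
```

The refactored version behaves identically; what changed is the dependency structure, which is exactly the tradeoff the paragraph describes.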

The human economy of job tasks is also tangled. As in software, we tend to clump tasks into jobs, jobs into teams, teams into divisions, and products into firms, all to keep more tangled task sets together in scopes where coordination is cheaper. It costs more to coordinate between people than within one person’s head, and more across divisions than within them. It costs more to coordinate further away in spatial and social distance, especially with foreign nations. Firms use tangling to decide whether to make or buy things, and which divisions to acquire or divest.

Management tasks tend to be more tangled, as are tasks done at larger firms. Tasks tied to law, regulation, and government can be especially tangled. More-tangled-on-average tasks include judging quality, determining compliance, making decisions, thinking creatively, developing strategies, scheduling and planning, interpreting, communicating, making and maintaining relations, selling, resolving conflicts, coordinating, training, motivating, advising, and administration. Less-tangled-on-average tasks include monitoring, identifying, estimating, handling and moving objects, operating and controlling machines and processes, using computers, drafting and specifying devices, and equipment repair and maintenance.

People who do tangled tasks tend to get paid more, especially in larger organizations. (People who have more tangled social connections are also seen as higher status.) Knots of tangled tasks are harder to change, requiring larger and more expensive reorganizations. The most tangled tasks also tend to be done in the largest cities, and toward the centers of those cities, where people are paid the most. The most tangled products tend to be exported from the most tangled nations.

When doing a task, the human brain typically draws on many brain regions. Some tasks draw on more than others. Different regions implement different tools, and compared to most familiar software, the brain has a very wide range of tools at its disposal. Even so, evolution was limited in how many tools it could build into a brain, because brain volume was limited.

Evolution was less limited in how well it could integrate brain tools; it could search long and hard for better ways to connect its limited set of tools. And our fluid flexible behavior suggests that the human mind integrates its wide range of tools very well. Some even go so far as to call our minds “general”, though of course we seem to be pretty bad at many tasks.

The most tangled brain regions are two key networks: one that manages attention, working memory, and decision-making, and another that manages mind-wandering, long-term memory retrieval, and self-reflection. Also, when comparing brains to ordinary computers, brain volume better connotes the computing resources devoted to a tool than it does the lines-of-code complexity of that tool.

Once we had artificial computers, we could use them to “automate” human mental tasks. (Of course, computers influence jobs and tasks in many other ways than via automation.) Oft-done tasks can more easily justify spending the key fixed costs of writing needed software, and of adapting neighboring tasks to changes in this one. A key marginal cost was for hardware to execute that code. Sometimes refactoring whole sets of related tasks helps to enable automation.
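The fixed-versus-marginal cost tradeoff above amounts to a simple break-even rule, sketched here with a hypothetical `worth_automating` helper and made-up numbers: automation pays when the savings over a task’s expected runs exceed the one-time cost of writing the software and adapting neighboring tasks.

```python
def worth_automating(runs, human_cost, fixed_cost, machine_cost):
    """Automate when per-run savings, summed over all expected runs,
    exceed the one-time fixed cost of software and re-coordination."""
    return runs * (human_cost - machine_cost) > fixed_cost
```

This is why oft-done tasks were automated first: a large `runs` amortizes the fixed cost, and falling hardware prices shrink `machine_cost` over time, moving ever more tasks past the break-even point.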

Before artificial computers, humans were basically the only computers available, and so they had to be used for all needed tasks, even those requiring only a tiny fraction of human capability. These simplest tasks were the first to be automated. Then as hardware got cheaper and we could afford to spend more on software, we worked to automate more complex tangled tasks. These are tasks that use more of the tools within each brain, that have a lot of complex internal structure, and that must be coordinated in more detail with a complex non-human or human world.

So far, humans have been limited in their competition with automation because their brain software has been stuck inside its brain hardware. Compared to today’s artificial hardware, human brain hardware is good at memory and parallel computation, but terrible at communication with outside systems. If our brain hardware remains stuck while artificial hardware keeps getting better, it seems that eventually everything must be done better and cheaper artificially.

However, eventually brain emulations (ems) should be possible. Then human software can use artificial hardware and compete on more equal terms. And once we understand more about human software, we’ll be able to change it at least somewhat. At that point, the question for each task will be: is this task better done by a descendant of human software, by a descendant of artificial software made via some process recognizably like how we now make software, or by software made via some other process?

Some seem to think the answer is obvious: descendants of human software must always lose. After all, human software was designed to perform in an obsolete forager environment, using limited brain hardware with poor communications. How could that possibly win against shiny new software? So they think humans must mainly plan how to retire gracefully from work, while somehow retaining an iron fist of control over systems much more capable than they.

But this ignores the very real existence of long-lived legacy systems. The world is full of large long-lived complex systems deeply tangled with other systems. Often, for long periods, no one has sufficient incentive to redesign them from scratch rather than incrementally adapt them to new circumstances. Why can’t human minds be such legacy systems?

Now I’m not trying to make any grand claims that I know in detail about the kind of software that will be most competitive in a trillion years. I’m instead saying that we should keep an open mind about the long-term advantages and disadvantages of descendants of human brain software, relative to future competitors.

Yes, our brain software has the disadvantages of being designed to behave in a long obsolete environment, using limited hardware with poor communication. Its designer didn’t even leave us documentation or a test suite. But human brain software also has two huge advantages.

First, human brains are the existing installed system to which a great many other systems have long been adapting, and with which they have been becoming deeply entangled. A great many tools and standards are designed with our brains in mind.

So systems that descend from human brains may retain the two key networks in our brains, even if new mind modules connect to those networks. Those descendants may talk to each other via a recognizable descendant of natural language, even if increased communication bandwidth allows those languages to be far more powerful. They may make agreements with each other using a recognizable descendant of familiar contract law, even if their law becomes much more flexible and powerful. And so on.

Our second big advantage is that human brains contain a wide set of mental tools that are very well integrated, far better than almost all the software that we have ever created. We have worked long and hard, and with varying success, to create software substitutes for each of the capacities we have seen our brains perform. But we really have little idea how to put that all together into a well-integrated whole.

When organizations write software, that software is less well integrated than software written by a single person. And the structure of such software tends to reflect the communication structure of the organization that writes it. This suggests that when we put software that we write to the task of writing more software, that further software will probably be even less well integrated than the software that we write directly. Our strong mental integration probably helps us to write more integrated software.

Human minds now do most tasks in the massive network of interconnected tasks that is our civilization. But we are slowly automating those tasks, starting with the tasks that rely more on the least tangled parts of our minds, and that are the least tangled with other tasks done by ourselves, our co-workers, our software tools, and with the larger non-task world.

The disadvantage that artificial software is less well integrated than human brains is the most tolerable for these least tangled tasks. But this disadvantage will come to matter more as we try to automate our more tangled tasks. Even when we can create substitutes for all of the tools in a human brain, we may still struggle to create integrated systems containing those tools. And until we learn to integrate software well, we may continue to have to throw away our large systems as they become too fragile to adapt well, and continue to rely on human-like minds to design at least somewhat integrated systems.

So we may often retain systems that inherit the structure of the human brain, and the structures of the social teams and organizations by which humans have worked together. All of which is another way to say: descendants of humans may have a long future as workers. We may have another future besides being retirees or iron-fisted peons ruling over gods. Even in a competitive future with no friendly singleton to ensure preferential treatment, something recognizably like us may continue. And even win.

  • MouchWesley

    The third big advantage is that humans are more than brains. We are also bodies that benefit from hundreds of millions of years of evolutionary tuning of our perceptual and motor systems. Cf. James J. Gibson (1979), The Ecological Approach to Visual Perception.

  • Robert Koslover

    I find your analysis insightful, articulate, and very persuasive, as usual. Tangled tasks or not, I no longer fear that robots will ever be able to eliminate human jobs. This is because, independently, I have discovered a powerful new law of social science! You see, in accordance with my new law, the primary job category for nearly every (99%+) human, in the sufficiently-far future, will be what we now call “bureaucrat.” The unique characteristic of jobs held by bureaucrats (vs. all other types of jobs) is that no matter how many bureaucratic jobs a society has, more can always be added. So here’s my proposed law, in a nutshell: “The number of employable bureaucrats in any finite social system increases.” Or, if you prefer, dB/dt > 0, where B = # of bureaucrats. (It’s a lot like the 2nd law of thermodynamics.) If humans can simply continue to exist, then AI, emulations of human minds, and any or all other minds in the universe will never be able to halt the expansion of human-filled bureaucratic jobs. It’s a law of nature, I tell you.

    • Bureaucrats follow rules, and rules are actually prone to automation.

      • Robert Koslover

        1. TGGP, I agree with you, at least in general terms. And I also see no reason that this would impose an upper bound to the growth in number of human bureaucrats. Some humans (perhaps even all) will ultimately report to robot bosses within the bureaucracy. Bear in mind that even if the robots can do bureaucratic jobs better, we humans will still be nearly impossible to fire; after all, it’s a bureaucracy!

        2. Further nomenclature: May I suggest we rename B as the “Byzantropy” of the social system? Now this new law can be expressed as: “The Byzantropy in any finite social system increases.” There. I like the ring of that. Best regards.

  • G Diego Vichutilitarian

    I’m not sure if the same argument could not be used as a defense of banks against crypto. Banks are complex, have many cycles, and have several symbiotic structures attached to them. We are untangling them from easy to hard, and people begin to project they’ll die. So where does the analogy break?

    • Robin Hanson

      Cryptocoin alternatives to banking are indeed searching for the least tangled applications to get a foothold. With such a foothold, they’ll try to push their way into more tangled financial applications.

  • lump1

    I think our economic viability depends on the nature of future markets. What humans and human-ems will always be good at is social and literary stuff. That’s what much of our higher cognitive functions seem to be optimized for. If that resembles a product that’s valuable in the future, we might be OK for a while. But I have some doubts.

    Ems might become the primary customers for those kinds of products: movies, music, novels, comedy, etc. But since in the em future they are barely clinging on to existence, and already-made creative stuff will be quite sufficient for their scarce leisure time, they will probably not support a very large creative-social industry. The big disadvantage of ems will be CPU efficiency. Emulations are inevitably far less efficient than native code. Emulating something as intricate as a brain might be as CPU-intensive as running a billion very advanced AIs. If future markets demand innovation in science and engineering, a billion of these streamlined AIs might turn out to be far better than a merely human mind. I mean, we can do science, but we’re definitely not optimized for it. If the economic value of a billion dumb AI scripts is higher than that of one emulated brain, and yet they use equivalent resources, who would rent their computer to an em rather than AIs?

    When people become economically obsolete, we don’t die. When ems become economically obsolete, they can’t afford the cost of running on a computer. This means they do die. And if enough die, the value of em-directed services decreases. If in turn those service providers were ems, they will die too. Ems will not only compete with each other, but also with dumb scripts, which start off with a huge efficiency advantage. They will be far less flexible, but I would bet that brilliant em coders will work very hard to engineer this extra flexibility into AIs. These ems may be among the last to die.

    • Robin Hanson

      “Emulations are inevitably far less efficient than native code.” I don’t understand where this comes from. You’d of course emulate as efficiently as possible.


  • Legacy systems do die though, eventually (mainframe computers for example). There was a lot of integration in the feudal courts of medieval Europe, around the land ownership structure. Then industry came along and now the monarchs and hereditary titles are a comical sideshow while the entirety of power shifted to a different set of people with very little overlap. You could argue that landline phone companies were the most tangled businesses around (not just literally) — everything had to go through the “last mile” they controlled. Yet I don’t know anyone with a landline account now.

    Extend the time frame slightly and there is no hope for humans.

    • Robin Hanson

      Our legal systems have inherited a lot from medieval European law, and today’s phone systems have inherited a lot from the old landline systems.

      • Yes, human legacy might endure via AIs assigning themselves proper names like Steve (just like cell phones have area codes and same length numbers) and transacting via contracts. Depending on the definition of ‘self’ this may be enough.

        I think on any sufficiently long timeframe, humans (or ems) need to play a marginally constructive role to survive, not just be there because of legacy. You can see this more and more with seniority in the workplace.

  • Lord

    Especially with the creative, novel, and original. Enhancements seem likely even as we progress in understanding the entanglement.

  • Personally, I find myself stretching to really grasp these ideas, and I find myself looking for concrete examples. I know it would have made the article significantly longer, but I just wanted to provide my data point as someone who would have benefited from them. I may not be the target audience, but http://lesswrong.com/lw/kh/explainers_shoot_high_aim_low/ may also be applicable.

    • Daniel Gomez

      Were you able to find a job after fullstack?

      • You replied to the wrong thread by accident. Please delete this comment and comment on the correct article’s thread.

  • Miles Jacob

    I have no idea if this is optimal but when untangling a bunch of cords my instinct has always been to loosen the largest and most complex knot first with the idea that it is preventing the most other cords from freely moving.

