Imagine that you want to untangle a pile of cables. It wasn’t tangled on purpose; tangling just resulted naturally from how these cables were used. You’d probably look for the least tangled cable in the least tangled part of the pile, and start to work there. In this post I will argue that, in a nutshell, this is how we are slowly automating our world of work: we are un- and re-tangling it.
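The untangling strategy can be made concrete as a greedy algorithm. This is only a sketch under my own assumptions (the names `peel_order` and `tangles` are hypothetical, not from the post): model the pile as a graph where nodes are cables and edges are tangles, and repeatedly free whichever cable has the fewest remaining tangles.

```python
from collections import defaultdict

def peel_order(tangles):
    """Greedy untangling: model cables as graph nodes and tangles as
    edges, then repeatedly free the cable with the fewest remaining
    tangles -- the 'least tangled cable in the least tangled part'."""
    graph = defaultdict(set)
    for a, b in tangles:
        graph[a].add(b)
        graph[b].add(a)
    order = []
    while graph:
        # Pick the cable with minimum degree (fewest entanglements).
        cable = min(graph, key=lambda c: len(graph[c]))
        for other in graph[cable]:
            graph[other].discard(cable)
        del graph[cable]
        order.append(cable)
    return order

# A small pile: 'd' touches only 'c', so it comes out first.
print(peel_order([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]))
```

Note that freeing one cable lowers the degree of its neighbors, so each removal makes the rest of the pile easier — which mirrors the post's claim that automating the least tangled tasks first gradually un-tangles the whole.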
You replied to the wrong thread by accident. Please delete this comment and comment on the correct article's thread.
Were you able to find a job after fullstack?
I have no idea if this is optimal, but when untangling a bunch of cords my instinct has always been to loosen the largest and most complex knot first, on the theory that it is preventing the most other cords from moving freely.
Personally, I find myself stretching to really grasp these ideas, and looking for concrete examples. I know it would have made the article significantly longer, but I just wanted to offer my data point as someone who would have benefited from them. I may not be the target audience, but http://lesswrong.com/lw/kh/... may also be applicable.
1. TGGP, I agree with you, at least in general terms. And I also see no reason that this would impose an upper bound on the growth in the number of human bureaucrats. Some humans (perhaps even all) will ultimately report to robot bosses within the bureaucracy. Bear in mind that even if the robots can do bureaucratic jobs better, we humans will still be nearly impossible to fire; after all, it's a bureaucracy!
2. Further nomenclature: May I suggest we rename B as the "Byzantropy" of the social system? Now this new law can be expressed as: "The Byzantropy in any finite social system increases." There. I like the ring of that. Best regards.
Yes, human legacy might endure via AIs assigning themselves proper names like Steve (just as cell phones have area codes and same-length numbers) and transacting via contracts. Depending on the definition of 'self', this may be enough.
I think that on any sufficiently long timeframe, humans (or ems) need to play a marginally constructive role to survive, not just be there because of legacy. You can see this more and more with seniority in the workplace.
Our legal systems have inherited a lot from medieval European law, and today's phone systems have inherited a lot from the old landline systems.
Especially with the creative, novel, and original. Enhancements seem likely even as we progress in understanding the entanglement.
Legacy systems do die though, eventually (mainframe computers for example). There was a lot of integration in the feudal courts of medieval Europe, around the land ownership structure. Then industry came along and now the monarchs and hereditary titles are a comical sideshow while the entirety of power shifted to a different set of people with very little overlap. You could argue that landline phone companies were the most tangled businesses around (not just literally) -- everything had to go through the "last mile" they controlled. Yet I don't know anyone with a landline account now.
Extend the time frame slightly and there is no hope for humans.
"Emulations are inevitably far less efficient than native code." I don't understand where this comes from. You'd of course emulate as efficiently as possible.
Bureaucrats follow rules, and rules are actually prone to automation.
Cryptocoin alternatives to banking are indeed searching for the least tangled applications to get a foothold. With such a foothold, they'll try to push their way into more tangled financial applications.
I think our economic viability depends on the nature of future markets. What humans and human-ems will always be good at is social and literary stuff. That's what much of our higher cognitive functions seem to be optimized for. If that resembles a product that's valuable in the future, we might be OK for a while. But I have some doubts.
Ems might become the primary customers for those kinds of products: movies, music, novels, comedy, etc. But since in the em future they are barely clinging to existence, and already-made creative work will be quite sufficient for their scarce leisure time, they will probably not support a very large creative-social industry. The big disadvantage of ems will be CPU efficiency. Emulations are inevitably far less efficient than native code. Emulating something as intricate as a brain might be as CPU-intensive as running a billion very advanced AIs. If future markets demand innovation in science and engineering, a billion of these streamlined AIs might turn out to be far better than a merely human mind. I mean, we can do science, but we're definitely not optimized for it. If the economic value of a billion dumb AI scripts is higher than that of one emulated brain, and yet they use equivalent resources, who would rent their computer to an em rather than to AIs?
When people become economically obsolete, we don't die. When ems become economically obsolete, they can't afford the cost of running on a computer. This means they do die. And if enough die, the value of em-directed services decreases. If those service providers were in turn ems, they will die too. Ems will not only compete with each other, but also with dumb scripts, which start off with a huge efficiency advantage. The scripts will be far less flexible, but I would bet that brilliant em coders will work very hard to engineer this extra flexibility into AIs. These ems may be among the last to die.
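The economic argument in the two comments above is just an arithmetic comparison, and can be sketched with purely illustrative numbers (none of the figures or names below come from the thread; `better_tenant` is a hypothetical helper):

```python
def better_tenant(em_value_per_hour, script_value_each, n_scripts):
    """Compare the value a compute owner gets from renting the same
    hardware to one emulated brain versus a swarm of dumb AI scripts.
    All inputs are hypothetical, for illustration only."""
    em_yield = em_value_per_hour
    script_yield = script_value_each * n_scripts
    return "em" if em_yield > script_yield else "scripts"

# If a billion scripts each produce even a tiny value per hour,
# they can collectively out-bid one em for the same hardware:
print(better_tenant(em_value_per_hour=1000.0,
                    script_value_each=0.00001,
                    n_scripts=1_000_000_000))
```

The point of the toy comparison: the em only survives while the left-hand side of this inequality wins, which is why obsolescence is existential for ems in a way it isn't for biological humans.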
I'm not sure the same argument could not be used as a defense of banks against crypto. Banks are complex, have many cycles, and have several symbiotic structures attached to them. We are untangling them from easy to hard, and people are beginning to project that they'll die. So where does the analogy break?
I find your analysis insightful, articulate, and very persuasive, as usual. Tangled tasks or not, I no longer fear that robots will ever be able to eliminate human jobs. This is because, independently, I have discovered a powerful new law of social science! You see, in accordance with my new law, the primary job category for nearly every (99%+) human, in the sufficiently-far future, will be what we now call "bureaucrat." The unique characteristic of jobs held by bureaucrats (vs. all other types of jobs) is that no matter how many bureaucratic jobs a society has, more can always be added. So here's my proposed law, in a nutshell: "The number of employable bureaucrats in any finite social system increases." Or, if you prefer, dB/dt > 0, where B = # of bureaucrats. (It's a lot like the 2nd law of thermodynamics.) If humans can simply continue to exist, then AI, emulations of human minds, and any or all other minds in the universe will never be able to halt the expansion of human-filled bureaucratic jobs. It's a law of nature, I tell you. :-)
The third big advantage is that humans are more than brains. We are also bodies that benefit from hundreds of millions of years of evolutionary tuning of our perceptual and motor systems. Cf. James J. Gibson (1979), The Ecological Approach to Visual Perception.