It has come to my attention that some think that by now I should have commented on Carl Shulman’s em paper Whole Brain Emulation and the Evolution of Superorganisms. I’ll comment now in this (long) post.
Michael, I completely agree. The idea that intellectual property and patents would have some utility within a superorganism is hard to imagine when all parts would necessarily value the welfare of the superorganism over their own.
That is supposed to be the idea in tribes, where the members of the tribe value the tribe over themselves. This heuristic can break down. It apparently has broken down in the US, with certain political factions valuing party over country. It has certainly broken down with AGW denialism, where certain people value their own short-term profit over the adverse effects of long-term global warming.
Why anyone who values their own personal welfare over any and every larger group imagines that they could trick a superorganism into thinking they were a loyal subject through sufficient signaling is quite strange. It is obviously false signaling.
True signalers of large-group loyalty would value the welfare of all humans the most. Any subset of humans that values the subset more than the whole is obviously composed of members who cannot be loyal to a larger group, because they are not loyal to the largest group. If individual superorganisms are not loyal to the group of superorganisms, then the group is not stable and cannot last long term.
I think what this means is that anyone selfish enough to prioritize their own speculative cryonic preservation over the welfare of large numbers of humans cannot be someone who could be loyal to a superorganism. Why would any superorganism revive an organism that will almost certainly be disloyal?
I think it's remarkable that in an article on "superorganisms," essentially something far more controlling of individuals than your average totalitarian dictatorship, words rooted in "slave" are used only twice, while a previous article on major league baseball seemed to argue that highly paid players are slaves because of some of the contractual limitations they agreed to in order to be paid to play.
On the superorganism idea, I don't think it is shared values at all that are needed to make this thing work, but rather a particular set of values. If all ems share the value "the individual is paramount," then you aren't going to have much of a superorganism advantage; the same holds if the ems are all psychopaths. Whereas if most of the ems hold the value "I value the superorganism's needs, as expressed by this particular command hierarchy, more highly than I value my own individual life or desires," then it doesn't matter whether those ems share their other values or are diverse in them.
Rafal, the property rights institutions I outline are also capable of altering minds and controlling their reproduction, to increase efficiency.
Carl, I can easily see how an autarch who rules a political/military regime, and is concerned about internal instability, might find it useful to create many short-lived copies assigned to tasks where loyalty matters much more than skills and experience, or where loyalty matters somewhat and appropriate skills and experience somewhat match the autarch's.
From this we should expect such em autarkies to be more stable against internal rebellion or coups, and to be more willing to adopt policies that might otherwise risk rebellion. But this doesn't suggest that such regimes will encompass larger geographic or economic scopes.
However, quotes like the following led me to see you as making much stronger claims about such em clans displacing most other ems in most social and economic niches:
The benefits of a willingness to self-sacrifice would be extremely high for human brain emulations. Specifically, a superorganism of such entities could realize a much higher level of economic productivity than narrowly self-concerned individuals. … Even a modest productivity advantage for a new lineage of emulations could allow it to outbid competing emulations for resources, rendering the necessities of existence unaffordable for self-concerned emulations dependent on wages. … The combination of competitive dynamics within and between regulatory jurisdictions would thus tend to result in a predominance of emulations fitting our definition of superorganisms.

These sound like stronger claims than just "boost[ing] our expectations of future coordination given emulations, aggregating across scenarios."
How coordinated is the U.S.?
Carl Shulman is very hard to out-rational. I hope the two of you mix it up in more depth, perhaps in this comments thread.
The em-based superorganism scenario has two aspects that differentiate it from a human civilization: control of reproduction and ease of altering minds. The feedback loops between system-wide processes and individual human reproduction are long and indirect, which leads to poor (from society's "viewpoint") function; e.g. female hypergamy is a form of individually fit but globally detrimental behavior that maintains antisocial traits in men. Human minds are rife with the effects of such evolutionary processes.

A superorganism would be defined by its ability to impose direct and effective control over the reproduction of its constituent parts (no longer "individuals"), as Carl describes in his article, and this should produce dramatic effects on the characteristics of its minds. Add to that advanced methods of designing minds and you should have a new quality of entity: minds thoroughly adjusted to social life, without the possibility of defection from its goals, without the ever-pervading hypocrisy that Robin so rightly thrust into the center of our attention. These minds should be capable of more than just building a slightly larger organization. Whether there would be direct value-file sharing, modified individual property rights, or other forms of regulation acting within the superorganism is a technical issue we mere humans are probably incapable of analyzing.
I do think that Robin is underestimating the degree of novelty and the jump in performance inherent in the superorganism. Humans are now at the level of slime molds when it comes to social cohesiveness; the superorganisms will be as cohesive as the cells of an individual human. I would expect the difference in capabilities to be just as dramatic.
TGGP, you correctly suggest that I am trying to attend to Hobbesian state-of-nature dynamics (of the sort that characterize the international system, and some aspects of the control of military and law enforcement agencies) that I think Robin's picture neglects.
Robin, I have limited time in the week before the Singularity Summit, so I won't necessarily be able to have back-and-forth conversation on this before then, but I will try to make a few points now.
The core idea that led me to write that working paper was that an organization with one trusted brain emulation that was loyal to a cause (to the point of copy self-sacrifice) could exhaustively test those loyalties (through experimentation with its copies) and then produce as many copies as needed to perform tasks for which loyalty was important. Further, after each round of testing, if that loyalty was reasonably stable for some period, then any task taking less time could be done by a copy created from a state saved after testing, and deleted after the task to prevent divergence. At the least, this could include operating military equipment, interpreting population surveillance data, and forceful police action. Since in fact regimes regularly fall due to a shortage of loyal security forces, this seemed like something that would be tempting to existing governments (capable of finding or training some candidates with appropriate loyalties), or to new ones created by an emulation "copy-family."
Also, in principle two parties that trusted a third party could exhaustively test and rely on copies of that third party to enforce deals with them: e.g. the different members of a military alliance could entrust their nuclear arsenals to a single force after exhaustively examining the loyalties and motivations of emulations selected and copied to staff the weapons platforms.
I would not place high confidence in particular scenarios involving these capabilities, just as I don't place high confidence in brain emulations coming before other forms of AI, including brain-inspired non-emulation AI, and the language in the paper reflects that. The post misreports me as saying that various things will happen, or are "natural endpoints," when I am instead arguing that these emulation capabilities should substantially boost our expectations of future coordination given emulations, aggregating across scenarios.
Values tend to be more integrated in a single organism’s brain, relative to larger families or species, and in a team or firm, relative to a nation or world.
It is not a coincidence that genetically homogeneous cells in an individual organism, which can (save edge cases, e.g. meiotic drive) only boost the reproductive success of their genes via contributing to the functioning of the whole organism, are able to cooperate well, or that genetically close individuals are more cooperative. Humans have been selected to serve these local alliances at the expense of the larger group, only partially offset by self-domestication. I used the superorganism language to highlight the similarities between upload groups with replication controlled through a centralized process based on group behavior and hive insects, colonial organisms, and fully integrated multicellular life.
In general, it is not at all obvious that the benefits of more value sharing are worth these costs. And even if more value sharing is worth the costs, that would only imply that value-sharing entities should be a bit larger than they are now, not that they should shift to a world-encompassing extreme.
States are already very large, with the US and China making up about a third of world GDP, and the next 8 countries making up another third. Most of the distance from forager bands to singleton has already been covered.
On Shulman’s more concrete argument, his suggested single-version approach to em value sharing, wherein a single central em only allows (perhaps vast numbers of) brief copies, can suffer from greatly reduced innovation. When em copies are assigned to and adapt to different tasks, there may be no easy way to merge their minds into a single common mind containing all their adaptations. The single em copy that is best on average across tasks may be much worse at each task than the best em for that task.
As I said in the paper and in our face-to-face discussions, this is most useful for tasks that must be done many times briefly and separately. Some such tasks seem quite important, e.g. monitoring other brain emulations, acting as security forces, etc. In others more cumulative training would be needed, requiring further testing as I said.
Shulman’s other concrete suggestion for sharing em values is “psychological testing, staged situations, and direct observation of their emulation software to form clear pictures of their loyalties.” But genetic and cultural evolution has long tried to make human minds fit well within strongly loyal teams, a task to which we seem well adapted. This suggests that moving our minds closer to a “borg” team ideal would cost us somewhere else, such as in our mental agility.
Evolutionary pressure was not only on team loyalty, but also on the "homo hypocritus" betrayal of group interests for selfish gain that you often talk about on this blog, leading to an arms race between deception and detection. With that dynamic, an exogenous improvement in technology for detecting and stably copying motivations changes the tradeoffs and can be a straightforward improvement, as with streetlights reducing crime.
Once a property rights system exists, the additional cost of applying it within a set of em copies seems small relative to the likely costs of strong value sharing.
Property rights are enforced by security forces (military and police) and adjudicated (at the top level) by the governments controlling those forces. As I said in the paper, reliance on many copies of ems with tested shared values is a substitute for external enforcement, and so more valuable for the more Hobbesian, such as controlling the security forces expected to enforce the laws, creating new enforcement organizations, or operating with inadequate legacy legal systems. And indeed, the problem of control over the security forces today is one where we see extensive indoctrination, testing and selection for loyalty (e.g. drawing from populations with favorable attitudes towards the government being served). We fairly frequently see regimes survive or fall, and property rights reassigned or not, based on whether security forces choose to support a revolution or coup.
Individual farmers have long faced starvation if they could not extract enough food from their property, and farmers were often out-competed by others who used resources more efficiently.
I agree that given a strong legal system and enforcement mechanisms, we could have your scenario where em templates willingly have copies made, and then those copies are shortly unwillingly slaughtered. Yes, starving peasants and Roman slaves mostly did not rebel, and were mostly massacred when they did. However, there is a separate issue of the motivations of the Roman citizens or medieval knights who did the enforcement, who swore religion-backed oaths of loyalty, cultivated camaraderie, received better treatment to evoke motives of reciprocity, etc.
@RH so that suggests that the real topic should first be "what are the forces that make property rights regimes stable right now?" and subsequently "how will those forces change?"
Doh, this comment is for the Grace-Hanson podcasts.
TGGP, I doubt it is a mere coincidence that in the context of an institute devoted to figuring out how to create a good singleton someone wrote a paper about how ems would evolve into near-singletons. Yes, people can and do try to dismiss property-based economics as a mere relic of the farming era. I'd argue for the robustness of economics in also analyzing industry and foraging.
Kaj, your focus is on the feasibility of merging minds, not on the social consequences. I don't have much comparative advantage on analyzing such feasibility.
Very large-scale cooperation happens rather naturally, which is why modern human societies typically use a monopolies and mergers commission to prevent monopolies from rivalling the governments that allowed them to exist in the first place.
Robin: While you're at it, might you be persuaded to say something about my and Harri Valpola's Coalescing Minds paper (currently under review for the International Journal of Machine Consciousness) as well?
Abstract: We present a hypothetical process of mind coalescence, where artificial connections are created between two or more brains. This might simply allow for an improved form of communication. At the other extreme, it might merge the minds into one in a process that can be thought of as a reverse split-brain operation. We propose that one way mind coalescence might happen is via an exocortex, a prosthetic extension of the biological brain which integrates with the brain as seamlessly as parts of the biological brain integrate with each other. An exocortex may also prove to be the easiest route for mind uploading, as a person’s personality gradually moves away from the aging biological brain and onto the exocortex. Memories might also be copied and shared even without minds being permanently merged. Over time, the borders of personal identity may become loose or even unnecessary.
(And now upon re-reading that abstract, I realize that we should possibly have emphasized a bit more that we expect mind coalescence to become very common once uploading is commonplace, regardless of the route.)
You may have borrowed the prison guard analogy from Judith Harris's excellent "The Nurture Assumption", but I don't think it fits. Prison guards don't intend to inspire or even reform prisoners. They merely enforce order, while other people, whether the warden, a minister, or another social-worker type, are charged with improving prisoners.
This is admittedly a gut reaction, but it irks me to see you argue against Shulman by lumping his argument together with Yudkowsky's and the big-single-god idea. One time at the beginning is alright, but then at the end you say "the Singularity Institute obsession with making a god to rule us all (well) had distracted him from thinking about real em life, as opposed to how ems might realize their make-a-god hopes". Could they not as easily (and lamely) reply that you are distracted by your 20th-century economist's obsession with individualist models featuring property rights, contracts, etc. that didn't exist before agriculture? My recollection of Shulman's prior arguments with you reflected less sci-fi theology than a Hobbesian state of nature colliding with winner-take-all dynamics.
If values are not sufficient for coordination, is the issue information?