Shulman On Superorgs

It has come to my attention that some think that by now I should have commented on Carl Shulman’s em paper Whole Brain Emulation and the Evolution of Superorganisms. I’ll comment now in this (long) post.

The undated paper is posted at the Singularity Institute, my ex-co-blogger Eliezer Yudkowsky’s organization dedicated to the proposition that the world will soon be ruled by a single powerful mind (with well integrated beliefs, values, and actions), so we need to quickly figure out how to design values for a mind we’d like. The main argument is that someone will soon design an architecture to let an artificial mind quickly grow from seriously stupid to super wicked smart. (Yudkowsky and I debated that recently.) Shulman’s paper offers an auxiliary argument, that whole brain emulations would also quickly lead to one or a few such powerful integrated “superorganisms.”

It seems to me that Shulman actually offers two somewhat different arguments, 1) an abstract argument that future evolution generically leads to superorganisms, because their costs are generally less than their benefits, and 2) a more concrete argument, that emulations in particular have especially low costs and high benefits.

The abstract argument seems to be that coordination can offer huge gains, sharing values eases coordination, and the costs of internally implementing shared values are small. On generic coordination gains, Shulman points to war:

Consider a contest … such that a preemptive strike would completely destroy the other power, although retaliatory action would destroy 90% of the inhabitants of the aggressor. For the self-concerned individuals, this would be a disaster … But for the superorganisms … [this] would be no worse than the normal deletion and replacement of everyday affairs.

On the generic costs of value sharing, I think Shulman’s intuition is that a mind’s values can be expressed in a relatively small static file. While it might be expensive to figure out what actions achieve any particular set of values, the cost to simply store a values file can be tiny for a large mind. And Shulman can’t see why using the same small file in different parts of a large system would cost more to implement than using different small files.
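
To make that storage intuition concrete, here is a minimal back-of-envelope sketch in Python; all the sizes below are my own illustrative assumptions, not figures from Shulman’s paper:

    # Illustrative numbers only; assumptions, not estimates from the paper.
    mind_state_bytes = 10**15    # suppose an em's full brain state is ~1 petabyte
    values_file_bytes = 10**4    # suppose its values compress to a ~10 KB file
    n_parts = 1000               # parts of a large system that each need values

    shared_cost = values_file_bytes              # one file referenced everywhere
    distinct_cost = n_parts * values_file_bytes  # a different file per part

    print(shared_cost / mind_state_bytes)    # 1e-11 of the mind's storage
    print(distinct_cost / mind_state_bytes)  # 1e-08, still negligible

On these assumptions both storage costs are rounding errors next to the mind itself; the interesting costs, if any, must lie elsewhere, as I argue below.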

Shulman’s concrete argument outlines ways for ems to share values:

Superorganisms [are] groups of related emulations ready to individually sacrifice themselves in pursuit of the shared aims of the superorganism. … To produce emulations with trusted motivations, … copies could be subjected to exhaustive psychological testing, staged situations, and direct observation of their emulation software to form clear pictures of their loyalties. … Members of a superorganism could consent to deletion after a limited time to preempt any such value divergence. … After a short period of work, each copy would be replaced by a fresh copy of the same saved state, preventing ideological drift.

Shulman also suggests concrete em coordination gains:

Many of the productivity advantages stem from the ability to copy and delete emulations freely, without objections from the individual emulations being deleted. … Emulations could have their state saved to storage regularly, so that the state of peak productivity could be identified. … whenever a short task arises, a copy of the peak state emulation could be made to perform the task and immediately be deleted. … Subject thousands or millions of copies of an emulation to varying educational techniques, … [and] use emulations that have performed best to build the template for the next “generation” of emulations, deleting the rest. … Like software companies, those improving emulation capabilities would need methods to prevent unlimited unlicensed copying of their creations. Patents and copyrights could be helpful, … but the ethical and practical difficulties would be great. … A superorganism, with shared stable values, could refrain from excess reproduction … without drawing on the legal system for enforcement.
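
The “next generation” scheme in that passage is in effect a selection loop over saved em states. Below is a minimal Python sketch of that loop; the scoring function is a made-up stand-in for the educational-outcome tests Shulman describes, and every name in it is hypothetical:

    import random

    def train_and_score(state, technique):
        # Stand-in for running one em copy through an educational technique
        # and measuring its productivity; returns (trained_state, score).
        trained_state = state + [technique]    # toy representation of training
        return trained_state, random.random()  # toy productivity score

    def next_generation(template, techniques):
        # Copy the template once per technique, score every copy, keep the
        # best copy as the next template, and "delete" all the rest.
        copies = [train_and_score(list(template), t) for t in techniques]
        best_state, _ = max(copies, key=lambda pair: pair[1])
        return best_state

    template = []  # initial saved em state (toy stand-in for a brain image)
    for generation in range(10):
        template = next_generation(template, techniques=range(1000))

Because copies are cheap and, on this view, deletion is unobjectionable, each pass of the loop can test thousands or millions of variants; that cheap iteration is where the claimed productivity gains come from.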

On the general abstract argument, we see a common pattern in both the evolution of species and human organizations — while winning systems often enforce substantial value sharing and loyalty on small scales, they achieve much less on larger scales. Values tend to be more integrated in a single organism’s brain, relative to larger families or species, and in a team or firm, relative to a nation or world. Value coordination seems hard, especially on larger scales.

This is not especially puzzling theoretically. While there can be huge gains to coordination, especially in war, it is far less obvious just how much one needs value sharing to gain action coordination. There are many other factors that influence coordination, after all; even perfect value matching is consistent with quite poor coordination. It is also far from obvious that values in generic large minds can easily be separated from other large mind parts. When the parts of large systems evolve independently, to adapt to differing local circumstances, their values may also evolve independently. Detecting and eliminating value divergences might in general be quite expensive.

In general, it is not at all obvious that the benefits of more value sharing are worth these costs. And even if more value sharing is worth the costs, that would only imply that value-sharing entities should be a bit larger than they are now, not that they should shift to a world-encompassing extreme.

On Shulman’s more concrete argument, his suggested single-version approach to em value sharing, wherein a single central em only allows (perhaps vast numbers of) brief copies, can suffer from greatly reduced innovation. When em copies are assigned to and adapt to different tasks, there may be no easy way to merge their minds into a single common mind containing all their adaptations. The single em copy that is best at doing an average of tasks may be much worse at each task than the best em for that task.
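
A toy numeric example of that last point, with invented scores:

    # Rows are em copies; columns are scores on three tasks (invented numbers).
    scores = {
        "generalist":   [7, 7, 7],
        "specialist A": [10, 3, 3],
        "specialist B": [3, 10, 3],
        "specialist C": [3, 3, 10],
    }

    # The single copy with the best average score is the generalist...
    best_average = max(scores, key=lambda em: sum(scores[em]) / len(scores[em]))
    print(best_average)  # "generalist", averaging 7 vs. about 5.3 for the others

    # ...yet on every individual task some specialist beats it (10 > 7).
    for task in range(3):
        print(max(scores, key=lambda em: scores[em][task]))

A single-version policy must pick one row of this table, and so forgoes the per-task gains of the specialists.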

Shulman’s other concrete suggestion for sharing em values is “psychological testing, staged situations, and direct observation of their emulation software to form clear pictures of their loyalties.” But genetic and cultural evolution has long tried to make human minds fit well within strongly loyal teams, a task to which we seem well adapted. This suggests that moving our minds closer to a “borg” team ideal would cost us somewhere else, such as in our mental agility.

On the concrete coordination gains that Shulman sees from superorganism ems, most of these gains seem cheaply achievable via simple long-standing human coordination mechanisms: property rights, contracts, and trade. Individual farmers have long faced starvation if they could not extract enough food from their property, and farmers were often out-competed by others who used resources more efficiently.

With ems there is the added advantage that em copies can agree to the “terms” of their life deals before they are created. An em would agree that it starts life with certain resources, and that life will end when it can no longer pay to live. Yes, there would be some selection for humans and ems who peacefully accept such deals, but probably much less than needed to get loyal devotion to and shared values with a superorganism.

Yes, with high value sharing ems might be less tempted to steal from other copies of themselves to survive. But this hardly implies that such ems no longer need property rights enforced. They’d need property rights to prevent theft by copies of other ems, including being enslaved by them. Once a property rights system exists, the additional cost of applying it within a set of em copies seems small relative to the likely costs of strong value sharing.

Shulman seems to argue both that superorganisms are a natural endpoint of evolution, and that ems are especially supportive of superorganisms. But at most he has shown that em organizations may operate at a somewhat larger scale, not that they would reach civilization-encompassing scales. In general, creatures who share values can indeed coordinate better, but perhaps not by much, and it can be costly to achieve and maintain shared values. I see no coordinate-by-values free lunch.

I am again glad to see Carl Shulman engage the important issue of social change given whole brain emulation, but I fear the Singularity Institute obsession with making a god to rule us all (well) has distracted him from thinking about real em life, as opposed to how ems might realize their make-a-god hopes.

Added 8a: To be clear, in a software-like em labor market, there will be some natural efficient scale of training, where all the workers doing some range of tasks are copies of the same intensely trained em. All those ems will naturally find it easier to coordinate on values, and can coordinate a bit better because of that fact. There’s just no particular reason to expect that to lead to much more coordination on larger scales.

  • http://entitledtoanopinion.wordpress.com TGGP

    This is admittedly a gut reaction, but it irks me to see you argue against Shulman by lumping his argument together with Yudkowsky’s and the big-single-god idea. One time at the beginning is alright, but then at the end you say “the Singularity Institute obsession with making a god to rule us all (well) has distracted him from thinking about real em life, as opposed to how ems might realize their make-a-god hopes”. Could they not as easily (and lamely) reply that you are distracted by your 20th century economist’s obsession with individualist models featuring property rights, contracts etc that didn’t exist before agriculture? My recollection of Shulman’s prior arguments with you reflected less sci-fi theology than a Hobbesian state of nature colliding with winner-take-all dynamics.

    If values are not sufficient for coordination, is the issue information?

  • http://entitledtoanopinion.wordpress.com TGGP

    You may have borrowed the prison guard analogy from Judith Harris’ excellent “The Nurture Assumption”, but I don’t think it fits. Prison guards don’t intend to inspire or even reform prisoners. They merely enforce order; other people, whether the warden, minister, or other social-worker types, are charged with improving prisoners.

    • http://entitledtoanopinion.wordpress.com TGGP

      Doh, this comment is for the Grace-Hanson podcasts.

  • https://plus.google.com/106597887376283858570/posts Kaj Sotala

    Robin: While you’re at it, might you be persuaded to say something about my and Harri Valpola’s Coalescing Minds paper (currently under review for the International Journal of Machine Consciousness) as well?

    Abstract: We present a hypothetical process of mind coalescence, where artificial connections are created between two or more brains. This might simply allow for an improved form of communication. At the other extreme, it might merge the minds into one in a process that can be thought of as a reverse split-brain operation. We propose that one way mind coalescence might happen is via an exocortex, a prosthetic extension of the biological brain which integrates with the brain as seamlessly as parts of the biological brain integrate with each other. An exocortex may also prove to be the easiest route for mind uploading, as a person’s personality gradually moves away from the aging biological brain and onto the exocortex. Memories might also be copied and shared even without minds being permanently merged. Over time, the borders of personal identity may become loose or even unnecessary.

    (And now upon re-reading that abstract, I realize that we should possibly have emphasized a bit more that we expect mind coalescence to become very common once uploading is commonplace, regardless of the route.)

  • http://timtyler.org/ Tim Tyler

    Very large scale cooperation happens rather naturally – which is why modern human societies typically use a monopolies and mergers commission to prevent such monopolies from rivalling the governments that allowed them to exist in the first place.

  • http://hanson.gmu.edu Robin Hanson

    TGGP, I doubt it is a mere coincidence that in the context of an institute devoted to figuring out how to create a good singleton someone wrote a paper about how ems would evolve into near-singletons. Yes, people can and do try to dismiss property-based economics as a mere relic of the farming era. I’d argue for the robustness of economics in also analyzing industry and foraging.

    Kaj, your focus is on the feasibility of merging minds, not on the social consequences. I don’t have much comparative advantage on analyzing such feasibility.

  • http://goodmorningeconomics.wordpress.com jsalvatier

    @RH so that suggests that the real topic should first be “what are the forces that make property rights regimes stable right now?” and subsequently “how will those forces change?”

  • Carl Shulman

    TGGP, you correctly suggest that I am trying to attend to Hobbesian state-of-nature dynamics (of the sort that characterize the international system, and some aspects of the control of military and law enforcement agencies) that I think Robin’s picture neglects.

    Robin, I have limited time in the week before the Singularity Summit, so I won’t necessarily be able to have back-and-forth conversation on this before then, but I will try to make a few points now.

    The core idea that led me to write that working paper was that an organization with one trusted brain emulation that was loyal to a cause (to the point of copy self-sacrifice) could exhaustively test those loyalties (through experimentation with its copies) and then produce as many copies as needed to perform tasks for which loyalty was important. Further, after each round of testing, if that loyalty was reasonably stable for some period, then any task taking less time could be done by a copy created from a state saved after testing, and deleted after the task to prevent divergence. At the least, this could include operating military equipment, interpreting population surveillance data, and forceful police action. Since in fact regimes regularly fall due to a shortage of loyal security forces, this seemed like something that would be tempting to existing governments (capable of finding or training some candidates with appropriate loyalties), or to new ones created by an emulation “copy-family.”

    Also, in principle two parties that trusted a third party could exhaustively test and rely on copies of that third party to enforce deals with them: e.g. the different members of a military alliance could entrust their nuclear arsenals to a single force after exhaustively examining the loyalties and motivations of emulations selected and copied to staff the weapons platforms.

    I would not place high confidence in particular scenarios involving these capabilities, just as I don’t place high confidence in brain emulations coming before other forms of AI, including brain-inspired non-emulation AI, and the language in the paper reflects that. The post misreports me as saying that various things will happen, or are “natural endpoints,” when I am instead arguing that these emulation capabilities should substantially boost our expectations of future coordination given emulations, aggregating across scenarios.

    Values tend to be more integrated in a single organism’s brain, relative to larger families or species, and in a team or firm, relative to a nation or world.

    It is not a coincidence that genetically homogeneous cells in an individual organism, which can (save edge cases, e.g. meiotic drive) only boost the reproductive success of their genes via contributing to the functioning of the whole organism, are able to cooperate well, or that genetically close individuals are more cooperative. Humans have been selected to serve these local alliances at the expense of the larger group, only partially offset by self-domestication. I used the superorganism language to highlight the similarities between upload groups with replication controlled through a centralized process based on group behavior and hive insects, colonial organisms, and fully integrated multicellular life.

    In general, it is not at all obvious that the benefits of more value sharing are worth these costs. And even if more value sharing is worth the costs, that would only imply that value-sharing entities should be a bit larger than they are now, not that they should shift to a world-encompassing extreme.

    States are already very large, with the US and China making up about a third of world GDP, and the next 8 countries making up another third. Most of the distance from forager bands to singleton has already been covered.

    On Shulman’s more concrete argument, his suggested single-version approach to em value sharing, wherein a single central em only allows (perhaps vast numbers of) brief copies, can suffer from greatly reduced innovation. When em copies are assigned to and adapt to different tasks, there may be no easy way to merge their minds into a single common mind containing all their adaptations. The single em copy that is best at doing an average of tasks may be much worse at each task than the best em for that task.

    As I said in the paper and in our face-to-face discussions, this is most useful for tasks that must be done many times briefly and separately. Some such tasks seem quite important, e.g. monitoring other brain emulations, acting as security forces, etc. In others more cumulative training would be needed, requiring further testing as I said.

    Shulman’s other concrete suggestion for sharing em values is “psychological testing, staged situations, and direct observation of their emulation software to form clear pictures of their loyalties.” But genetic and cultural evolution has long tried to make human minds fit well within strongly loyal teams, a task to which we seem well adapted. This suggests that moving our minds closer to a “borg” team ideal would cost us somewhere else, such as in our mental agility.

    Evolutionary pressure was not only on team loyalty, but also on the “homo hypocritus” betrayal of group interests for selfish gain that you often talk about on this blog, leading to an arms race between deception and detection. With that dynamic, an exogenous improvement in technology for detecting and stably copying motivations changes the tradeoffs and can be a straightforward improvement, as with streetlights reducing crime.

    Once a property rights system exists, the additional cost of applying it within a set of em copies seems small relative to the likely costs of strong value sharing.

    Property rights are enforced by security forces (military and police) and adjudicated (at the top level) by the governments controlling those forces. As I said in the paper, reliance on many copies of ems with tested shared values is a substitute for external enforcement, and so more valuable for the more Hobbesian, such as controlling the security forces expected to enforce the laws, creating new enforcement organizations, or operating with inadequate legacy legal systems. And indeed, the problem of control over the security forces today is one where we see extensive indoctrination, testing and selection for loyalty (e.g. drawing from populations with favorable attitudes towards the government being served). We fairly frequently see regimes survive or fall, and property rights reassigned or not, based on whether security forces choose to support a revolution or coup.

    Individual farmers have long faced starvation if they could not extract enough food from their property, and farmers were often out-competed by others who used resources more efficiently.

    I agree that given a strong legal system and enforcement mechanisms, we could have your scenario where em templates willingly have copies made, and then those copies are shortly unwillingly slaughtered. Yes, starving peasants and Roman slaves mostly did not rebel, and were mostly massacred when they did. However, there is a separate issue of the motivations of the Roman citizens or medieval knights who did the enforcement, who swore religion-backed oaths of loyalty, cultivated camaraderie, received better treatment to evoke motives of reciprocity, etc.

    • http://entitledtoanopinion.wordpress.com TGGP

      How coordinated is the U.S.?

    • http://hanson.gmu.edu Robin Hanson

      Carl, I can easily see how an autarch who rules a political/military regime, and is concerned about internal instability, might find it useful to create many short-lived copies assigned to tasks where loyalty matters much more than skills and experience, or where loyalty matters somewhat and appropriate skills and experience somewhat match the autarch’s.

      From this we should expect such em autarchies to be more stable against internal rebellion or coups, and to be more willing to adopt policies that might otherwise risk rebellion. But this doesn’t suggest that such regimes will encompass larger geographic or economic scopes.

      However, quotes like the following led me to see you as making much stronger claims about such em clans displacing most other ems in most social and economic niches:

      The benefits of a willingness to self-sacrifice would be extremely high for human brain emulations. Specifically, a superorganism of such entities could realize a much higher level of economic productivity than narrowly self-concerned individuals. …Even a modest productivity advantage for a new lineage of emulations could allow it to outbid competing emulations for resources, rendering the necessities of existence unaffordable for self-concerned emulations dependent on wages. … The combination of competitive dynamics within and between regulatory jurisdictions would thus tend to result in a predominance of emulations fitting our definition of superorganisms.

      These sound like stronger claims than just “boost[ing] our expectations of future coordination given emulations, aggregating across scenarios.”

  • Rafal Smigrodzki

    The ems-based superorganism scenario has two aspects that differentiate it from a human civilization: control of reproduction and ease of altering minds. The feedback loops between system-wide processes and individual human reproduction are long and indirect, which leads to poor (from society’s “viewpoint”) function – e.g. female hypergamy is a form of individually fit but globally detrimental behavior that maintains antisocial traits in men. Human minds are rife with the effects of such evolutionary processes. A superorganism would be defined by its ability to impose direct and effective control over the reproduction of its constituent parts (no longer “individuals”), as Carl describes in his article – and this should produce dramatic effects on the characteristics of its minds. Add to it advanced methods of designing minds and you should have a new quality of entity: minds thoroughly adjusted to social life without the possibility of defection from its goals, without the ever-pervading hypocrisy that Robin so rightly thrust into the center of our attention. These minds should be capable of more than just building a slightly larger organization. Whether there would be direct value-file-sharing, modified individual property rights or other forms of regulation acting within the superorganism is a technical issue we mere humans are probably incapable of analyzing.

    I do think that Robin is underestimating the degree of novelty and the jump in performance inherent in the superorganism. Humans are now at the level of slime molds when it comes to social cohesiveness; the superorganisms will be as cohesive as the cells of an individual human. I would expect the difference in capabilities will be just as dramatic.

    • http://hanson.gmu.edu Robin Hanson

      Rafal, the property rights institutions I outline are also capable of altering minds and controlling their reproduction, to increase efficiency.

  • http://www.hopeanon.typepad.com Hopefully Anonymous

    Carl Shulman is very hard to out-rational. I hope the two of you mix it up in more depth, perhaps in this comments thread.

  • Michael Wengler

    I think it’s remarkable that in an article on “superorganisms,” essentially something a lot more controlling of individuals than your average totalitarian dictatorship, the words rooted in “slave” are used only twice, but in a previous article on major league baseball, the thesis of the article seems to be that the highly paid players are slaves because of some of the contractual limitations they have agreed to in order to be paid to play.

    On the superorganism idea, I don’t think it is shared values at all that are needed to make this thing work, but rather it is a particular set of values. If all ems share the value that “the individual is paramount” then you aren’t going to have much of a superorganism advantage, same if the ems are all psychopaths. Whereas if most of the ems have as a value “I value the superorganism’s needs as expressed by this particular command hierarchy more highly than I value my own individual life or desires” then it doesn’t matter whether those ems share other values or are diverse in their other values.

  • http://daedalus2u.blogspot.com/ daedalus2u

    Michael, I completely agree. That the idea of intellectual property and patents would have some utility within a superorganism is hard to imagine when all parts would necessarily value the welfare of the superorganism over their own.

    That is supposed to be the idea in tribes, where the members of the tribe value the tribe over themselves. This heuristic can break down. It apparently has broken down in the US with certain political factions valuing party over country. It has certainly broken down with AGW denialism, where certain people value their own short-term profit over avoiding the adverse effects of long-term global warming.

    Why anyone who values their own personal welfare over any and every larger group can somehow imagine that they could trick a superorganism into thinking they were a loyal subject by sufficient signaling is quite strange. It is obviously false signaling.

    True signalers of large group loyalty would value the welfare of all humans the most. Any subset of humans that values the subset more than the whole is obviously composed of members who cannot be loyal to a larger group, because they are not loyal to the largest group. If individual superorganisms are not loyal to the group of superorganisms, then the group is not stable and cannot last long term.

    I think what this means is that anyone selfish enough to prioritize their own speculative cryonic preservation over the welfare of large numbers of humans can’t be someone who could be loyal to a superorganism. Why would any superorganism revive an organism that is virtually certain to be disloyal?
