Cooperate Or Specialize?

Futurists sometimes get excited about new ways to encourage cooperation in Prisoner’s Dilemma-like games. For example, future folks might interact via quantum games, future AIs might show each other their source code, or future clans of em copies might super-cooperate with one another. Folks who know just enough economics to be dangerous sometimes say that this “changes everything”, i.e., that future economies will be completely different as a result. In fact, however, not only do we already have lots of decent ways to encourage cooperation, such as talking and reputation, we also consistently forgo such ways in order to better encourage flexibility and specialization.
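For concreteness, here is a minimal Python sketch of the standard point (the payoff numbers are the usual textbook choices, not anything from a particular source): in a one-shot prisoner’s dilemma, defection strictly dominates, yet even a very crude reputation mechanism, a partner who simply remembers and mirrors your last move, makes cooperation the better policy.

```python
# PAYOFFS[(my_move, their_move)] -> my payoff; standard textbook numbers.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def one_shot_best_reply(their_move):
    """In a single play, defection strictly dominates."""
    return max("CD", key=lambda m: PAYOFFS[(m, their_move)])

assert one_shot_best_reply("C") == "D"
assert one_shot_best_reply("D") == "D"

def repeated_score(my_strategy, rounds=20):
    """Score a strategy against tit-for-tat, which cooperates
    until you defect and then mirrors your last move."""
    partner_move, total = "C", 0
    for _ in range(rounds):
        my_move = my_strategy(partner_move)
        total += PAYOFFS[(my_move, partner_move)]
        partner_move = my_move  # partner mirrors me next round
    return total

# Once play repeats and the partner remembers, cooperation wins.
assert repeated_score(lambda _: "C") > repeated_score(lambda _: "D")
```

Tit-for-tat here stands in for the whole family of talk-and-reputation mechanisms; the point is only that such mechanisms are old and ordinary, not exotic.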

As I reviewed in my last post, we have strong reasons and abilities to cooperate within family clans, especially when such clans heavily intermarry and live and work closely together over many generations. And our farming-era ancestors took big advantage of this. To function and thrive, however, our industry-era economy had to suppress such clans, to allow more flexibility and specialization. Industry needs people to frequently change where they live, what kinds of jobs they do, and who they work with, and to play fair within the newly reorganized firms, cities, and nations of the industrial era. Strong family clans instead encouraged stability and nepotism, discouraged people from moving to cities and new jobs, and discouraged cooperating fairly with, and showing sufficient loyalty to, other families within shared firms, cities, and nations.

Our industry-era institutions consistently forgo the extra cooperation advantages of strong family clans, to gain more flexibility and specialization. This is now a huge net win. Our descendants are likely to similarly forgo advantages from new ways to cooperate, if those similarly reduce future flexibility and specialization. For example, future societies of brain emulations are likely to be wary of strongly self-cooperating clans of copies of the same original human. While such copy clans have even stronger reasons to cooperate with each other than family clans do, copy clans might make future organizations suffer even more than today’s family-based firms, cities, and nations do from clan-based nepotism, and from low-quality and inflexible matches of skills to jobs. Em firms and cities are thus likely to be especially watchful for clan nepotism, and to avoid relying too heavily on any one clan.

Yes, game theory captures important truths about human behavior, including about the costs we pay from failing to fully cooperate. But prisoner’s dilemma style failures to cooperate in simple games comprise only a tiny fraction of all the important things that can and do go wrong in a modern economy. And we already have many decent ways to encourage cooperation. I thus conclude that future economies are unlikely to be heavily redesigned to take advantage of new possible ways to encourage prisoner’s dilemma style cooperation.

  • Aron Vallinder

    While it might be correct that our em descendants (if there are any) will cooperate, my impression is that much of the interest in finding new ways to encourage cooperation comes from assuming a picture of the future in which AIs are very different from humans. Are you simply saying that this picture is implausible, or are you saying that even if we assume this picture, finding new ways to encourage cooperation isn’t very important?

    • Robin Hanson

      Most important features of our economy do not depend on particulars of how human minds are organized.

      • Jess Riedel

        But the tendency toward nepotism, and especially the crowding out of distant cooperation by local cooperation, *do* strongly depend on the particulars of human minds, right?

      • Robin Hanson

        Not obviously, no.

      • Rafal Smigrodzki

        Robin, you are making a highly counterintuitive, even extraordinary claim. Would you care to elaborate on the particulars?

      • Robin Hanson

        It is the ordinary presumption among economists, not at all extraordinary there.

      • Stephen Diamond

        I suppose you mean that since much economics follows from simple rationality assumptions, the results don’t depend on the specific characteristics of the human mind, which, if they deviate from rationality, can only be irrational.

        But this assumes that AI will only subtract irrationalities and not add any, ignoring the likelihood that overcoming human irrationalities will simply substitute different ones, or that AI minds will, for other reasons, implicate irrationalities of their own.

        Perhaps this is partly based on a latent assumption that perfect rationality is a logical possibility. (For rebuttal, see “Buridan’s ass and the psychological origins of objective probability” — http://tinyurl.com/bqrjjxh )

      • Rafal Smigrodzki

        I notice you did not elaborate on the question, even though at least three of the commenters asked it.

      • Philip Goetz

        You have the causality backwards. We have only one type of mind; therefore, we have the simple economy, and the simple economic theory, that are possible with homogeneous agents. Our economy is the degenerate case.

  • adrianratnapala

    If AI happens, or even brain-level emulation, it might actually blur the distinction between individuals and clans since it might be easy to create a worker on demand, use her and then re-absorb any knowledge gained into the collective.

    When I search Google, I don’t really care that it is a massively distributed swarm of independent Unix processes, just so long as it is just barely consistent enough to be recognisable.

    If some future AI (or em) were limited to work that could be done on one nearly-serial piece of hardware, she would not get the same brand recognition or economies of scale as a massively parallelised competitor.

  • Douglas Knight

    How does the strong local cooperation of clans interfere with specialization?

    I think you have it backwards. It is cooperation outside the clan that allows specialization. Clans are a tradeoff between strong local cooperation and weak distant cooperation. New cooperation technology could increase both.

  • Paul Christiano

    Talking and reputation seem like good solutions; but how well do such solutions work in the long run? Many people have the intuition that the success of those systems to date depends on altruism, humans’ inability to lie, humans’ insufficient cutthroatness, the absence of sophisticated manipulators or large-scale sybil attacks, benevolent regulators, etc. I think libertarians tend to think that everything will just work out, but I think the average American intellectual disagrees.

    The general question is “how much can you achieve with reputation systems?” and it seems like many different answers are a priori conceivable, including “much less than modern society achieves.” This has been the subject of my recent research, and I now think that as a theoretical question the answer is “everything you could hope for.”

    It seems obvious that some forms of cooperation come with costs, such as increased rigidity. But I don’t see why you would infer that cooperation itself comes with costs. This seems like a particularly extreme and indefensible form of the reasoning “Nature trades off X for Y, so probably it is impossible to obtain X without trading off Y.” I agree that giant em clans come with big costs, though I think you understate the gains. I expect that e.g. better reputation systems won’t come with significant costs in terms of flexibility or specialization, because I don’t see any good reason why they would. Of course I wouldn’t expect such schemes to “change everything,” I would mostly hope for them to let us capture familiar gains from cooperation in a modestly broader range of contexts (which still may be a big gain).
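    To make that concrete, here is a toy sketch in Python, with rules and payoff numbers invented purely for illustration, of what a reputation system can enforce and of the kind of detail its success hinges on:

    ```python
    # Toy reputation model, with invented rules and payoffs: a public
    # record marks anyone who defects, and agents refuse to trade with
    # marked partners. Whether honesty pays then hinges on a detail:
    # how many future trades are at stake.

    GAIN = 3        # each side's payoff from one honest trade
    CHEAT_GAIN = 5  # one-time payoff from defecting on a partner

    def lifetime_payoff(ever_cheats, opportunities=50):
        """Total payoff when the reputation system excludes a known
        defector from all trades after the first defection."""
        if ever_cheats:
            return CHEAT_GAIN  # cheat once, then no one trades with you
        return opportunities * GAIN

    # With enough future trade at stake, the system sustains honesty:
    assert lifetime_payoff(False) > lifetime_payoff(True)
    # With too little at stake, it quietly fails:
    assert lifetime_payoff(True) > lifetime_payoff(False, opportunities=1)
    ```

    The same mechanism that sustains honesty when many future trades are at stake fails when they are scarce, which is the familiar folk-theorem caveat in miniature.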

    • Robin Hanson

      Yes, the success of our methods of cooperating probably depends on many details, but that is likely to be true for all the other possible ways to cooperate as well. I doubt there is a single general solution for all contexts. Glad your theory suggests reputation can be taken a long way, though I suspect that result also depends on some details.

      I don’t think that cooperation necessarily comes with costs, but for now and the foreseeable future I guess it will in fact come with costs, because all the ways I know to do it come with such costs. Yes, capturing more gains in a modestly broader range of contexts is a more reasonable goal and expectation.

    • IMASBA

      “I agree that giant em clans come with big costs, though I think you understate the gains. I expect that e.g. better reputation systems won’t come with significant costs in terms of flexibility or specialization, because I don’t see any good reason why they would.”

      Without clans you need another mechanism to distribute society’s gains back to its members (like social security and organized charity); if this doesn’t exist, then most individuals would be worse off than in a clan society, because most of the gains would go to a handful of people at the top.

  • Kevin Dick

    It would be interesting for you to catalog all the “impossibility theorems” that limit the amount of “easy” cooperation: Prisoner’s Dilemma, Arrow, Myerson–Satterthwaite, Gibbard–Satterthwaite, Holmstrom’s theorem, etc. I’m sure there must be others.

  • Michael Vassar

    Whatever happened to “cooperation is hard”? I mean, looking at humans, in practice even the minimally difficult cooperation with one’s near-future self seems to be hard enough to be one of the main determinants of people’s outcomes.

    • Robin Hanson

      Coordination is hard. Cooperating in prisoner’s dilemma type situations is only a small part of coordination.

  • Rafal Smigrodzki

    AI minds might carve at different joints than humans do. Remember that a human consists of both circuitry needed to calculate goals, including goals evolved in clan-based societies, and circuitry for solving various technical tasks. An AI, even one inspired by a human upload, might be dissected to separate these parts, and such parts could be traded between larger entities. In this way goal systems could remain stable within a group of related AI minds, even as trade with other such groups provides the flexibility and diversity needed to solve various technical problems, thus enabling both close cooperation and specialization/flexibility. The relevant analogy might be bacteria that are clonal yet capable of exchanging task-relevant information (e.g. antibiotic resistance genes) with unrelated bacteria.

    It is too much of a simplification to expect that the uploaded world will consist of human-like minds still under the influence of evolutionary pressures similar to what shaped us. Everything could change, because the nitty-gritty technical details that affect fitness in the substrate are likely to be different from the physical and chemical constraints that shaped us.

  • Philip Goetz

    I am going to guess that you think cooperation and specialization are at odds because you’re thinking in terms of game theory, which assumes that agents are indivisible and interchangeable. In the em world, we don’t even know how to assign responsibility for an action to an agent, or how to compute the change in social utility when an agent copies itself or gives itself an upgrade, or how to define death or identity.

    A single human can be regarded as a collection of AIs. The hand is specialized, yet cooperates with the eye.