Let’s Not Kill All The Lawyers

The first thing we do, let’s kill all the lawyers. — Shakespeare, Henry VI, Part 2

Commenters on yesterday’s law post are obsessed with the scenario where future robots exterminate humans.  From my ’94 essay If Uploads Come First:

What if short people revolt tonight, and kill all the tall people?  In general, most societies have many potential subgroups who could plausibly take over by force, if they could coordinate among themselves. But such revolt is rare in practice; short people know that if they kill all the tall folks tonight, all the blond people might go next week, and who knows where it would all end? And short people are highly integrated into society; some of their best friends are tall people.

In contrast, violence is more common between geographic and culturally separated subgroups. Neighboring nations have gone to war, ethnic minorities have revolted against governments run by other ethnicities, and slaves and other sharply segregated economic classes have rebelled.

Thus the best way to keep the peace with uploads would be to allow them as full an integration as possible with the rest of society. Let them live and work with ordinary people, and let them loan and sell to each other through the same institutions they use to deal with ordinary humans. Banishing uploads to space, the seas, or the attic so as not to shock other folks might be ill-advised. Imposing especially heavy upload taxes, or treating uploads as property, as just software someone owns or as non-human slaves like dogs, might be especially unwise.

It is always possible in principle for everyone but some small group to agree to violate the prevailing law and peace and exterminate or enslave that small group. We could for example do this to retirees today, and avoid their being “useless parasites” on society. We could similarly eliminate some sick, weak, mentally ill, stupid, or idle rich. But we don’t. Why?

We would suffer large costs to coordinate to do this, so the group we “eat” would need to be large enough to make this predation pay. And this first act of coordination would lower the cost of similar coordination against other small groups afterward, so each of us would acquire a heightened fear of being “eaten” in further rounds of extermination or enslavement. These “slippery slope” expectations greatly add to the perceived cost of any first round of such coordinated predation.

This predation coordination is also much more expensive for groups that are well integrated into our society. Such groups would hear early about the proposal to eat them, retaliate against the proposers, suggest other groups to eat instead, and in the worst case actively resist plan implementation. Their elimination would disrupt their many relations with others, and harm many others who care about them or see such predation as immoral.

As long as future robots remain well integrated into society, and become more powerful gradually and peacefully, at each step respecting the law we use to keep the peace among ourselves, and also to keep the peace between them, I see no more reason for them to exterminate us than we now have to exterminate retirees or everyone over 100 years old.  We live now in a world where some of us are many times more powerful than others, and yet we still use law to keep the peace, because we fear the consequences of violating that peace.  Let’s try to keep it that way.

  • Grant

    The costs of large-scale robot coordination could be vastly less than large-scale human coordination.

    It still seems to me that peaceful integration is the best option, but I rather doubt the median voter will share that opinion. Another key point may be to liberalize medical and “wetware” experimentation, allowing humans to augment their own intelligence as technology advances.

  • Robert Johnson

    “We would suffer large costs to coordinate to do this…”
    If by coordination costs you mean that it would be destabilizing to the society, then yes. But I think the direct costs of simply coordinating aggressive action by one group against another are actually quite low. In Rwanda it was done through some existing social and commercial networks, and through access to the airwaves.

    • http://hanson.gmu.edu Robin Hanson

      The fact that it has happened before does not imply its costs are low; it usually does not happen.

      • Robert Johnson

        I didn’t mean that because it happened before it is implied that costs are low. I meant that the methods of coordination used in Rwanda have low costs.

        But, since you insist, even if it is true that it “doesn’t usually happen” (which can be argued against with countless examples), infrequency does not imply that the costs are high.

  • Carl Shulman

    An initial predatory alliance can lock in its structure with techniques for mind modification and verification.

  • http://timtyler.org/ Tim Tyler

    Alas, this is another one of those ridiculous “uploads” posts.

    So far upload enthusiasts seem to have relied on the argument that at least uploads will happen sometime this century – whereas who knows when we will have a decent engineered intelligence? That argument makes no sense.

    I think the credibility of the upload scenario needs much better foundations before enthusiasts waste more of their time by building on top of it. On the face of it, an upload scenario is pretty ridiculous.

    It suggests that not one of the other possible shortcuts to building machine intelligence will work – whereas what we actually see is machines playing chess, recognising speech, playing the stock market, and acting as oracles – using no brain scanning at all.

    Why don’t real engineers scan brains to solve these kinds of problems? It’s because the approach is useless and has dim prospects.

    • loqi

      It’s funny that you call out Caplan in the previous post for dredging up a tired argument (and rightly so), yet here you are tossing out “That argument makes no sense” and “an upload scenario is pretty ridiculous” with nothing but the most token support.

      I seem to recall you making this point several times in the past. Posts like this one that take uploading for granted aren’t really giving you anything new to argue against, so perhaps you could just post this link in the future.

      Alternatively, have you considered writing a top-level post on the topic for lesswrong? I for one would be interested in the resulting discussion, which should be significantly more focused than anything likely to emerge in response to your simplistic “I’m still right” comment.

      • http://timtyler.org/ Tim Tyler

        On reflection, this post isn’t really about uploads – so perhaps this is the wrong spot for a discussion.

  • http://entitledtoanopinion.wordpress.com TGGP

    The title of this post reminded me: what do you think of Scalia’s recent claim that too much valuable human capital has been invested in lawyering?
    http://volokh.com/2009/10/05/are-lawyers-a-productive-part-of-society/
    http://volokh.com/2009/10/06/too-many-lawyers-or-too-many-laws/

    • http://hanson.gmu.edu Robin Hanson

      I agree the law we now have is far from ideal, and a better law would induce fewer lawyers.

      • http://robertwiblin.wordpress.com Robert Wiblin

        Because law is substantially zero sum competition?

      • http://hanson.gmu.edu Robin Hanson

        No, because the law is needlessly ambiguous and legal process is needlessly expensive.

      • Robert Johnson

        I assume you say “needlessly expensive” because you believe that the law could be made clearer, and that that would make court cases shorter or less common, or something similar to that. Is that it?

        What’s your evidence that such improvements are possible? What would better law look like? How would we know that it’s better? What’s the risk of unintended consequences? Why aren’t we using better law now? Isn’t this a problem that lots and lots of very intelligent people have been struggling with for a very long time? What insight do you have that makes possible what they have failed to accomplish?

  • Jeffrey Soreff

    I don’t see law as a particularly effective barrier to genocide on a timescale of longer than a decade or so. The last century saw genocide in Germany, Russia, China, Cambodia, Rwanda… I see law as essentially frozen politics, reflecting alliances that hold enough power to rewrite it, albeit with a time lag. When a group which wants to eliminate opponents gets enough power to do so, they regularly either rewrite the law or ignore it. I’d expect that in an economy dominated by uploads, the law would eventually be rewritten to redefine unmodified humans as having no legal rights, just as animals have no (or almost no) legal standing today.

    Even if uploads never happen, but sufficiently powerful AI systems are constructed to replace humans in all of the tasks needed to replicate the AIs, I’d ultimately expect a similar outcome. I’d expect AI systems to be embedded within (initially) human organizations such as corporations – a fair fraction of which act as if “with depraved indifference to human life” were already a corporate mission statement _today_. If they had the option to replace employees with cheaper hardware, I see no reason why they would refrain from doing so – and no reason why the ultimate result of this process would be survivable by unmodified humans.

  • Psychohistorian

    This fear of a future robot takeover seems to contain a significant anthropomorphizing of robots. There’s nothing fundamentally bad about being enslaved; it’s our human-ness that makes slavery seem unpleasant, and that makes us want to overthrow or excise disagreeable subgroups of the population. If robots are not programmed to mimic human attitudes, there’s no particular reason why they should resent “mistreatment” or pursue dominance.

    Also, if we continue a free-market economy under current laws, it seems quite likely that robot labor could crowd out human labor rather quickly, reducing humans to subsistence or near-subsistence levels without a fairly robust welfare state (or property rights on robots and their production). RH may not have a problem with this, but human voters almost certainly will.

    • Psychohistorian

      Whoops, fell victim to my own fallacy: there is no particular reason robots would desire to accumulate capital. Though if they don’t, it’s rather unclear how they’d function within a market economy. Seems they’d be programmed to be productive without desiring lots of resources or capital. This sounds rather like slavery, actually, but if they’re cool with it, so am I.

  • http://blog.efnx.com Schell Scivally

    I think that the window of time in which humans can interact with equally endowed artificial intelligences will prove to be narrow enough to slip by without us even noticing. After that, machine intelligences seem likely to be so advanced that they would view us as we currently view cells and other rather ‘simple’ beings.

  • Johnicholas

    Preface: I agree with the post wholeheartedly.

    Free-market capitalism is something like an arena inside of a larger human society. Some pairwise voluntary exchanges are allowed in that arena, and other exchanges are forbidden. The bounds of what property is and isn’t are set by the larger society, which is generally coercive and not pairwise-voluntary, and therefore not free-market capitalism.

    If an agent or group of agents is sufficiently powerful (sufficiently wealthy) inside of the free-market arena, they may be able to change the rules of the arena to their advantage – e.g. rent-seeking.

    Does this mean that in order for society to continue in a competitive, free-market manner, we depend on the success of anti-trust and the resistance of elected officials to special-interest lobbies?

    How long can we expect society to continue to be competitive?

  • http://www.weidai.com Wei Dai

    As Jeffrey Soreff pointed out, these kinds of revolts aren’t “rare” even on a human timescale. From Wikipedia:

    In rural China, political movements against landlords caused the humiliation and death of many former land owners. Immediately following the land reform period came the Three-anti and Five-anti Movements (三反五反), as well as the beginning of the Anti-Rightist Movement, when property owners and businesspeople were labeled as “rightists” and purged. Some scholars estimate that at least one million people were killed during this period.

    Robin, how do you explain why coordination costs and “slippery slope” expectations failed to prevent this from happening?

    • http://hanson.gmu.edu Robin Hanson

      Wikipedia knows of many rare events. Coordination costs are not infinitely high, but they are high enough to make such events rare, especially today.

      • http://www.weidai.com Wei Dai

        Well, that’s what I’m asking: what is different between today and yesterday that makes such revolts less likely now? Is this condition likely to continue to hold in the future?

      • http://www.weidai.com Wei Dai

        To explain why I’m asking this question, in case it’s not clear, I think unless you can give an explanation of why such revolts happened in the past, and why those reasons will not apply in the future, you’re not going to be able to overcome the strong intuition most people have that robots will revolt against humans.

      • http://hanson.gmu.edu Robin Hanson

        What gave you the impression I was offering a guarantee that there would never ever be any future revolts or violations of a legal peace? Landlords in rural China were not that well integrated into the peasant society; they kept themselves a world apart, offered little value, held themselves as higher status, and kept a large fraction of income. If humans try to do that to robots, they may well get revolts.

      • http://www.weidai.com Wei Dai

        Robin, many people want to keep themselves apart, have higher status, keep a large fraction of income, etc. If laws are not a sufficient mechanism to allow them to do that without triggering a robot revolt, naturally they’re going to look into possible alternative solutions, such as programming robots with certain kinds of values.

      • Robert Johnson

        Robin, you really think that genocide is a “rare event”? There sure have been a lot of rare events in the past 200 years (and longer).

        Oddly, you are right about one thing: these extremely common events of one group of people trying to exterminate another are becoming less common in terms of % of living people who are killed during them. At least, that’s something I read online recently…

      • TGGP

        I recently trotted out the argument from normality of genocide, which nudged Chip Smith into saying that the event which gave rise to the term “genocide” didn’t qualify, and that it is in general of limited utility in understanding the conflicts it is generally applied to. He’s already a Holocaust denier though, so I guess it isn’t that big a leap.

  • Patri Friedman

    Robin correctly mentions that the likelihood of genocide or revolution depends on class homogeneity, but then somehow misses the glaringly obvious fact that humans and robots are REALLY REALLY HETEROGENEOUS! With all the effort spent to integrate races, people all over the world still look at other people and classify them into a category of “other” (not my tribe) just based on skin color, and you think that dispersing robots through society will prevent humans and robots from viewing each other as different tribes?

    • http://hanson.gmu.edu Robin Hanson

      The degree of integration is more important than the degree of homogeneity. Women and men are very different, but so integrated that gendercide is unthinkable.

      • Carl Shulman

        “We could similarly eliminate some sick, weak, mentally ill, stupid, or idle rich. But we don’t. Why?”

        Among other reasons, because human voters *value* their sick, weak, mentally ill, stupid, etc. relatives, and feel Far compassion towards non-relatives.

        With respect to the idle rich, extensive progressive taxation is the norm in democracies, the rich can engage in focused lobbying/support of anti-expropriation politics, and voter values (ethnic pride where the rich are of the same group, the American Dream, attitudes around the legitimacy of parents providing for their children) constrain confiscatory taxation.

        “Women and men are very different, but so integrated that gendercide is unthinkable.”
        Women and men have deep biological drives towards *valuing* some members of the opposite sex as mates and family members.

  • http://blog.jim.com James A. Donald

    Such groups would hear early about the proposal to eat them, retaliate against the proposers, suggest other groups to eat instead, and in the worst case actively resist plan implementation.

    Suppose, as seems likely, uploads think and coordinate several thousand times faster than humans running on a wet substrate. By the time wetware humans hear about the plan, it will be under way.

  • http://www.thirtysecondthoughts.blogspot.com John Clifford

    Assume that sentient robots would be inherently rational, without the flaws that make humans psychotic or irrational. If a robot can replicate itself, then it has the ability to power itself and repair itself. What does it need us for?

    Throughout history, genocide has often been seen as rational. Carthago delenda est… and the Romans killed or enslaved everyone, razed the city, and plowed salt under the fields. For the Romans, genocide was a rational response to the conflict with Carthage in that it solved the problem once and for all.

    War has been described as diplomacy by other means. What happens when the sentient robots, all communicating at the speed of light, decide to do something that humans oppose? What if they know they’re right? In that case, would killing humans be seen as a rational response?

  • nick

    I too find Robin’s repeated claims that exterminations (even ignoring the wide variety of slightly lesser persecutions, enslavements, rent-seekings, etc. in which the robots could engage) are “rare” in human history to be bizarre. There are hundreds of documented wars and genocides per century, probably vastly greater numbers of smaller-scale undocumented cases in prior centuries, and nearly a million murders per year. Furthermore, humans only have to be exterminated once. Robin has the burden of proving, against all historical knowledge, that the probability of such an extermination in any given year, by any of possibly billions of super-intelligences of widely varying values, that furthermore can rapidly evolve over the course of a single year, is extremely low. It defies all plausibility, but I think the problem here is that Robin (and alas, he is not the only one) is a student of economics (which assumes all transactions are voluntary) and not of history (which observes that they quite often are not).

    I do, however, find Eliezer Yudkowsky’s discussions of “Friendly AI” quite vague, and the distinction between Robin’s “law” and Eliezer’s “feelings” positions to be meaningless. Eliezer correctly points out that we can’t count on analogies to our own evolved psychologies to hold, because AIs will be designed rather than evolved, or at least (per genetic AI techniques) evolved in a very different environment. This being the case, what basis is there to make a strong distinction between a “law” that prevents an act and a “feeling” or “desire” that prevents an act? There’s nothing in contemporary computer architecture or design, beyond our anthropomorphizing of machines, to suggest such a strong distinction — it’s all just partial recursive functions, often with interaction and concurrency thrown in. So is there actually any major and meaningful distinction between “program the motivation” and “program the law”, a distinction that does not rely on anthropomorphism and applies whether or not the AI is designed or evolved? If so, what is that distinction, in concrete engineering or mathematical terms, and, again without anthropomorphizing by assuming machines have “feelings” we can empathize with, why does it lead to different outcomes?

    • http://lesswrong.com/ Eliezer Yudkowsky

      An FAI’s values are programmed-in, facts about the AI’s initial conditions in a hopefully-proven-stable self-modifying system. (My job is to figure out how to do that exactly.)

      Robin’s laws are laws in the human sense, not in the sense of the Three Laws of Robotics; they are imposed from without by threatened sanctions, carried out from outside the AI – presumably by other AIs, who do so under threat of sanctions themselves, and so on. Robin thinks the AIs will not coordinate to change the system for fear of being the victims of their own next coordinated change.

  • http://www.aleph.se/andart/ Anders Sandberg

    Whether wars and democides (government attacks on its citizens) are uncommon depends entirely on what one compares them to. I am fond of examining them statistically. Looking at the Correlates of War database (http://www.correlatesofwar.org) shows that ~2.16 wars start per year, most lasting days to weeks and having intensities on the order of 10-100 people dead per day. However, there is a well-known power-law frequency-size relationship (the Richardson law) with exponent -1.31, which means that arbitrarily large wars can occur with a non-negligible probability. Rummel’s _Statistics of Democide_ appears to support another, steeper power law for democides. Compared to wars, democides appear to be more deadly: less common, but the size of the killings is often larger. The total democide death toll of the 20th century could well be on the order of 250 million.

    However, looking at the full death toll worldwide across the last century, the probability of being killed by democide, war, or violence in general has been pretty small compared to the normal medical causes of death. I think this strongly supports Robin’s claim that mass killing is too costly to be indulged in for weak reasons. Rummel’s observations about how democracy strongly reduces democide risks also seem to support the idea that societies where people are integrated are less likely to get rid of them.
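
    A minimal sketch in Python of what those figures imply: it treats war onsets as a Poisson process at the ~2.16/year rate above and uses the -1.31 Richardson tail. The severity threshold S_MIN is an assumed illustrative parameter, not a value fitted to the Correlates of War data.

        import math

        ALPHA = 1.31          # Richardson tail exponent cited above
        S_MIN = 1_000         # assumed minimum death toll for a counted war (illustrative)
        WARS_PER_YEAR = 2.16  # war onset rate cited above

        def p_single_war_exceeds(s):
            # Tail probability for one war: P(S >= s) = (s / S_MIN) ** -ALPHA
            return (s / S_MIN) ** (-ALPHA)

        def p_any_war_exceeds(s, years):
            # Chance that at least one war in `years` exceeds s, treating onsets
            # as a Poisson process thinned by the single-war tail probability.
            expected_big_wars = WARS_PER_YEAR * years * p_single_war_exceeds(s)
            return 1.0 - math.exp(-expected_big_wars)

        # E.g., the chance of at least one war with over ten million deaths in a century:
        print(f"P = {p_any_war_exceeds(1e7, 100):.4f}")

    Under these assumed numbers the century-scale risk of a ten-million-death war comes out small but nonzero, and it is quite sensitive to S_MIN and the exponent; with a tail this heavy, much of the cumulative death toll comes from the rare largest events.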

  • http://www.rationalmechanisms.com Richard Silliker

    How about Chipmunks with pulse lasers?

  • http://www.rationalmechanisms.com Richard Silliker

    “We live now in a world where some of us are many times more powerful than others, and yet we still use law to keep the peace, because we fear the consequences of violating that peace. Let’s try to keep it that way.”

    I know the psychopaths in this world are happy to hide behind the law.