Prefer Law To Values

On Tuesday I asked my law & econ undergrads what sort of future robots (AIs, computers, etc.) they would want, if they could have any sort they wanted. Most seemed to want weak, vulnerable robots that would stay lower in status, e.g., short, stupid, short-lived, easily killed, and without independent values. When I asked “what if I chose to become a robot?”, they said I should lose all human privileges, and be treated like the other robots. I winced; it seems anti-robot feelings are even stronger than anti-immigrant feelings, which portends a stormy robot transition.

At a workshop following last weekend’s Singularity Summit, two dozen thoughtful experts mostly agreed that it is very important that future robots have the right values. It was heartening that most were willing to accept high-status robots with vast, impressive capabilities, but even so I thought they missed the big picture. Let me explain.

Imagine that you were forced to leave your current nation and had to choose another place to live. Would you seek a nation where the people were short, stupid, sickly, etc.? Would you select a nation based on what the World Values Survey says about typical survey responses there?

I doubt it.  Besides wanting a place with people you already know and like, you’d want a place where you could “prosper”, i.e., where they valued the skills you had to offer, had many nice products and services you valued for cheap, and where predation was kept in check, so that you didn’t much have to fear theft of your life, limb, or livelihood.  If you similarly had to choose a place to retire, you might pay less attention to whether they valued your skills, but you would still look for people you knew and liked, low prices on stuff you liked, and predation kept in check.

Similar criteria should apply when choosing the people you want to let into your nation.  You should want smart capable law-abiding folks, with whom you and other natives can form mutually advantageous relationships.  Preferring short, dumb, and sickly immigrants so you can be above them in status would be misguided; that would just lower your nation’s overall status.  If you live in a democracy, and if lots of immigration were at issue, you might worry they could vote to overturn the law under which you prosper.  And if they might be very unhappy, you might worry that they could revolt.

But you shouldn’t otherwise care that much about their values. Oh, there would be some weak effects. You might have meddling preferences and care directly about some values. You should dislike folks who like the congestible goods you like, and like folks who share your taste for goods dominated by scale economies. For example, you might dislike folks who crowd your hiking trails, and like folks who share your tastes in food, thereby inducing more of it to be available locally. But these effects would usually be dominated by peace and productivity issues; you’d mainly want immigrants able to be productive partners, and law-abiding enough to keep the peace.
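
To make the direction of these two effects concrete, here is a minimal numeric sketch; the functional forms and numbers are purely hypothetical, chosen only to illustrate the congestion-versus-scale-economy contrast just described.

```python
# Hypothetical illustration: how one more person who shares your tastes
# affects you, depending on whether the shared good is congestible or
# enjoys scale economies. All functional forms and numbers are made up.

def crowding_cost(users, capacity=100):
    # A fixed-capacity good (e.g., a hiking trail): your crowding cost
    # rises with utilization. Quadratic form chosen only for illustration.
    return (users / capacity) ** 2

def unit_price(buyers, fixed_cost=1000.0, marginal_cost=1.0):
    # A scale-economy good (e.g., a niche cuisine's local supply chain):
    # average cost falls as more buyers share the fixed cost.
    return marginal_cost + fixed_cost / buyers

# One more person who shares my congestible good makes me slightly worse off;
# one more who shares my scale-economy good makes me slightly better off.
print(crowding_cost(50), "->", crowding_cost(51))
print(unit_price(200), "->", unit_price(201))
```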

Similar reasoning applies to the sort of animals or children you want. We try to coordinate to make sure kids are raised to be law-abiding, but wild animals aren’t law-abiding, don’t keep the peace, and are hard to form productive relations with. So while we pay them lip service, we actually don’t like wild animals much.

Similar reasoning should apply to what future robots you want. In the early to intermediate era, when robots are not vastly more capable than humans, you’d want peaceful, law-abiding robots as capable as possible, so as to make productive partners. You might prefer that they dislike your congestible goods, like your scale-economy goods, and vote like most voters, if they can vote. But most important would be that you and they have a mutually acceptable law as a good enough way to settle disputes, so that they do not resort to predation or revolution. If their main way to get what they want is to trade for it via mutually agreeable exchanges, then you shouldn’t much care what exactly they want.

The later era when robots are vastly more capable than people should be much like the case of choosing a nation in which to retire.  In this case we don’t expect to have much in the way of skills to offer, so we mostly care that they are law-abiding enough to respect our property rights.  If they use the same law to keep the peace among themselves as they use to keep the peace with us, we could have a long and prosperous future in whatever weird world they conjure.  In such a vast rich universe our “retirement income” should buy a comfortable if not central place for humans to watch it all in wonder.

In the long run, what matters most is that we all share a mutually acceptable law to keep the peace among us, and allow mutually advantageous relations, not that we agree on the “right” values.  Tolerate a wide range of values from capable law-abiding robots.  It is a good law we should most strive to create and preserve.  Law really matters.

  • http://www.transhumangoodness.blogspot.com Roko

    In the long run, what matters most is that we all share a mutually acceptable law to keep the peace among us, and allow mutually advantageous relations, not that we agree on the “right” values.

    You seem to be summoning up an externally imposed respect for the property rights and legal rights of humans. I.e. when the AIs are many times smarter than us and many times more powerful, you seem to be assuming that there will be some ethereal force that will prevent them from breaking the law against us and violating our property rights.

    A law, being a kind of contract, is only of any value if there is some greater power than either party who signed it around to enforce it. This seems obvious to me. Since you are very smart, you must have some reason to believe that such enforcement will occur, but I fail to see what that reason is.

    • Carinthium

      Programming them to be very different from humans and thus not selfish to the core?

  • Peter Twieg

    I’m a bit surprised by the intuitive rejection of rights for robo-Hanson. Is it because people are suspicious of the proposition that humans will be able to coexist peacefully with robots, or just sheer anthropomorphic bias?

    I wonder what people’s intuitions are on when being a member of the human race ends and being a robot begins. If your brain existed in a silicon substrate and you were otherwise the same, are you a robot? Should you lose your rights? Would people even accept that this is possible? Wonder if the experimental philosophers have touched this…

  • Bryan Caplan

    You’ve heard this all before, Robin, but I can’t resist. You can’t “become a robot,” any more than I can become a prime number. You might be able to make a robot that is very similar to yourself, but it still wouldn’t be you.

    Admittedly, I would probably find a robot simulation of you very congenial. But I would never be able to forget that he wasn’t the real Robin.

    • some guy

      How do you know Robin is the same Robin as a moment ago, or a moment in the future? Every moment of consciousness is a separate being anyway; they are different people in the same way that a mind-uploaded robo-Robin and a human-Robin are different people.

      (I lurk here, forgive me if what I said is already well-known among OB/LW users)

  • http://causalityrelay.wordpress.com/ Vladimir Nesov

    In addition to what Roko said: what kind of contracts will be enforced is chosen by the enforcer. Thus, while the values of individual citizens may not matter much, the values of the law do matter. The law doesn’t help wild animals, and in the same way the law isn’t obliged to help humans if humans aren’t in control of it, unless something with human values is in control.

    • http://hanson.gmu.edu Robin Hanson

      We mainly enforce law because it is in our interest to do so; we fear the consequences of not doing so. It helps though that we push social norms respecting the law.

      • http://causalityrelay.wordpress.com/ Vladimir Nesov

        But what kind of law in particular do we push, and how value-specific is this choice? If “robots” hold different values, they’d probably come to enforce a different law, one that won’t benefit humans.

        Of course, the relation between the law and individual values isn’t trivial, so it’s fair to say that some laws we enforce look like we’d be better off without them, yet the outside view shows that they are useful. This, however, isn’t enough to say that laws are independent of values; it just isn’t obvious what the dependence is.

      • http://hanson.gmu.edu Robin Hanson

        The entire field of law and economics, which I’m teaching this semester, is devoted to showing how the laws we want depend relatively little on our specific values.

      • http://causalityrelay.wordpress.com/ Vladimir Nesov

        I expect as much, which only helps to mask the dependence that remains, on extremely unusual values.

      • http://www.transhumangoodness.blogspot.com Roko

        It seems that Islamic Sharia Law is an explicit counterexample to the idea that law is independent of values. If there were a very small number of western atheists in an all Islamic society who wanted to break sharia, they would quickly be oppressed.

        But even Sharia Law is a human invention. A society where almost all of the power was concentrated in nonhuman machines might implement very different kinds of legal systems.

  • http://michaelkenny.blogspot.com Mike Kenny

    If robots better at self-propagating will own the future, shouldn’t humans see any robot propagation as a threat, assuming robots and humans would be competing for some resources that help them propagate?

    Self-propagation doesn’t necessarily mean law-abiding-ness, so it seems the chances are possibly good that robots would stop being law-abiding when some strategy for robot propagation emerged that didn’t have any regard for law-abiding-ness.

    Aren’t humans and robots in a zero-sum game, and aren’t humans wise to suppress robot competition, the way Neanderthals might have benefited from preventing humans from coming onto the scene?

    • http://hanson.gmu.edu Robin Hanson

      Do you see Mormon reproduction today as threatening to make you lose a zero-sum game and so want to suppress them?

      • http://michaelkenny.blogspot.com Mike Kenny

        I am losing a zero-sum game against Mormons, but I don’t want to suppress them because I don’t have anything against them. But if the rule of law is the important thing, isn’t it that whenever robots propagate without following the rule of law, the rule of law is going to erode? Presumably the robot production process wouldn’t be so controlled that any possible change that hurts human rights would be gotten rid of, and over time the processes that helped robots reproduce quickly would win out over slower processes, or harmful ones, regardless of human rights.

        Similarly, if Democrats can gain more votes by breaking the law, it doesn’t matter that they are breaking the law because they have the power to do what they want, and when Republicans get more votes by ignoring the law, they’ll ignore the law and get the power. This seems the trend–the rules as they were written in the past progressively lose their original meaning or are ignored (like blue laws).

        Another interesting angle–Party X might want their opponent, Party Y, to follow the rule of law to limit his options for gaining power, while Party X is sneaky and gains advantage by ignoring the law where it limits his options. Arguably one could say you’re doing the same given you’re pro-robot! ;)

    • http://michaelkenny.blogspot.com Mike Kenny

      i’m randomly recalling this chicago boyz blog post. i think that chicago boyz post influenced my argument above and in the response to robin directly below, or substantially is the argument above and in the response to robin directly below, and so i’d like to give credit to the chicago boyz post.

      • http://michaelkenny.blogspot.com Mike Kenny

        for clarity, when i say, “my argument above and in the response to robin directly below”–i mean my first two comments in this comment section.

  • Carl Shulman

    Robin’s argument for AIs/brain emulations respecting human property rights seems to be that changing legal systems is costly (it may endanger existing rightsholders), and that human wealth will be so small relative to the total that the fixed costs of adjusting a pre-existing legal system protective of humans will not be outweighed by the limited benefits to a coalition to expropriate humanity. Given the current and likely claims of human nation-states to various terrestrial and extraterrestrial resources, and the rise in resource prices enabled by cheap labor and capital, I am skeptical.

    • http://hanson.gmu.edu Robin Hanson

      Robots would likely be associated with particular nations, and have access to the resources of a nation just as its human citizens do. The vast productivity of robots would allow them to buy the resources they need just as rich folks today can buy what they want.

      • Carl Shulman

        The market value of oil, agricultural land, and other resource inputs to the economy is historically small in comparison to the value of skilled human labor and capital. In an environment where raw materials and energy can be rapidly converted into skilled labor, the price of those resources will tend to rise to equal that of the skilled labor stream that they could be used to produce if property rights are respected.
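
        A minimal sketch of this arbitrage logic, with entirely hypothetical numbers: if a bundle of raw materials can be converted into a worker earning a wage stream, competition bids the bundle’s price up toward the present value of that stream.

        ```python
        # Hypothetical numbers only: arbitrage pricing of a raw-material bundle
        # that can be converted into a skilled worker.

        def present_value(annual_wage, discount_rate, years):
            # Discounted value of the wage stream the converted materials would earn.
            return sum(annual_wage / (1 + discount_rate) ** t for t in range(1, years + 1))

        wage = 50_000.0             # assumed annual output of one converted worker
        conversion_cost = 10_000.0  # assumed cost of the conversion itself
        pv = present_value(wage, discount_rate=0.05, years=30)
        print(f"bidding ceiling for the material bundle: {pv - conversion_cost:,.0f}")
        # So raw materials stop being cheap relative to skilled labor.
        ```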

      • http://yudkowsky.net/ Eliezer Yudkowsky

        Not to mention that timeless decision theory and updateless decision theory make certain kinds of coordination an awful lot cheaper.

      • http://hanson.gmu.edu Robin Hanson

        Carl, I don’t see the relevance of the price of raw materials to this post. Along an incremental development path robots would buy more raw materials as they became more productive and gained a larger fraction of world income.

        Eliezer, surely it is a bit early to claim much confidence in the impact of your personal decision theory research on future global coordination costs.

  • http://timtyler.org/ Tim Tyler

    A fairly conventional position is that we will be able to build robots to do whatever we like – more or less. After all, we built them – we ought to be in control of their actions – unless we make a *severe* mess of our engineering.

    So: if we want to have them obey the law, then obeying the law is what they will do.

    If we can build them to value obedience to the law, then I don’t see why we would avoid giving them other values. Non-violence and obedience are among Asimov’s classical proposals, for example.

  • http://robertwiblin.wordpress.com Robert Wiblin

    Bryan: Robin changes year to year and day to day. Becoming a robot needn’t be a bigger change than those normal changes over time. Do you often think that the Robin you talk to now is “very congenial” but “not the real Robin” from when you first met?

    “Tolerate a wide range of values from capable law-abiding robots.”

    Wouldn’t it be pretty awesome for humans, though, if lots of robots had as their value to “satisfy humans’ desires”?

  • http://timtyler.org/ Tim Tyler

    Bryan, the “you can’t become a robot” business seems pretty tired.

    Robin *could* gradually turn into a robot – by the gradual replacement of his component parts.

    If you want to argue that such a procedure is impossible, we may have to find you a philosopher to hash out whatever objections you have with the idea.

    If you want to argue that such a procedure would inevitably destroy some “essence of Robinness” then all that means is that you have a different conception about what Robinness consists of – in which case you are just defining a concept differently, not disagreeing.

    • weewilly

      If Robin kept “upgrading” until she/he was a robot totally, then she/he would cease to “be” Robin, in the same way that Theseus’ ship would stop being “his” ship once every board were replaced.

  • Grant

    The obvious objection to this is that robots might decide to share human values until it is no longer profitable for them to do so. Robots may have more mutable values than humans, allowing them to almost immediately change their morals and how they act towards humans and human property.

    Given the potential of information technology to drastically lower the transaction costs associated with collective action (either through coercive control, as in the film I, Robot, or something voluntary like an assurance contract), this threat seems like something your argument needs to address, doesn’t it?

    Humans almost never get out of bed one day and decide to murder a subset of their population and take their stuff, because morals have to be relatively immutable for society to function (i.e., if we could turn off our guilt whenever we wanted, it would be a useless emotion). But robot societies might be more successful with morals that are more explicitly stated and changed.

  • http://causalityrelay.wordpress.com/ Vladimir Nesov

    Carl: Also, even if “legal systems” don’t change whole, individual laws change over time just fine, accumulating into a qualitative difference.

    • http://hanson.gmu.edu Robin Hanson

      Yes laws change, but this doesn’t mean groups are enslaved or exterminated thereby.

  • Daniel Burfoot

    I think your view of the future is inappropriately influenced by your study of economics. Capitalism is only one way of organizing society. There are many others:

    outright military power relations
    feudalism
    formal politics (voting, legislation, judicial action)
    informal politics (backroom negotiations, horse-trading, favor-currying)
    theocracy

    That’s just what I can come up with from human history off the top of my head. Groups of AIs will almost certainly develop new ways of organizing themselves and making decisions. They might create sophisticated statistical methods for querying a population to determine what the will of the group is. They might respond to a particular issue by giving absolute power to deal with the issue to the Mind that can compose the best poem on the topic.

    I think our particular methods for negotiating resource allocation and coordinating group activity are very specific to our present human condition. Capitalism is already showing signs of breaking down in many different areas. I doubt the machines will have much use for it.

  • Yvain

    1. Law and rights are ways of resolving conflicting desires. I’d like to have all of your money, you’d like to have all of my money, so we invent property rights and laws under which neither of us can steal the other’s money. Robots (except brain ems) won’t have those kinds of desires unless we program them in. Adding the desire to take my money, and then adding in obedience to a law that says they can’t, is a weird and circuitous way of developing a robot that doesn’t steal my money.

    2. Related to the point above, laws are a very limited way of getting the best society possible within the limiting framework of human nature. There are millions of optimizations not included in laws because they refer to areas that the law can’t reach, or that it would be unethical to enforce legally. For example, we may prefer that people lie less, but there may be ethical and practical reasons why we don’t make laws against lying. But a robot designer could create robots that intrinsically value honesty. This would be an opportunity to improve a robot society, or a joint human-robot society, that anyone who concentrated on laws alone would be missing.

    3. Related to the point above, I see no reason why robots’ desires should be anything other than our own. Yes, in humans, selfishness is valuable in areas like capitalism, and the use of external force to enforce altruism is always a violation of the human’s own will. But when designing robots from the ground up, we have no such limits. We could design a robot that wanted to build a prosperous corporation and then donate all the money to charity, just as easily as we could design a robot that would only build a prosperous corporation if the robot could spend the money on nice suits and fast cars. In general, I think the idea of a systematic conflict between the desires of robots and of humans, such that humans would only survive if the robots had some law to restrain them, is a sign that some robot designer made a big mistake somewhere (and again, this is assuming human-level robots for a long period of time, which if Eliezer is right won’t even be an issue). We may want to have a law ready as a backup in case the robot designer does make such a mistake, but it shouldn’t be our first line of defense.

    4. Part of the interest in evaluating immigrants’ values comes from a desire for peace, prosperity, and the rule of law. Some immigrant groups get a reputation for being law-abiding, hard-working citizens, and in many cases these immigrant groups are welcomed. Others get a reputation for being violent and lazy, and in many cases these immigrant groups are rejected. Putting aside questions of ethics and racial discrimination, if our only goal is to increase peace, prosperity, and obedience to the rule of law, screening immigrants’ values is a *great* way to do it. We can have the best laws in the world, and it won’t make a shred of difference unless our immigrants have values that respect the rule of law. Compare criminal behavior among different immigrant groups all living within the same country and legal system. I would hardly want to apply less stringent criteria to evaluating robots than to evaluating immigrants.

    • http://hanson.gmu.edu Robin Hanson

      “We” can’t design robot values together; different groups will instead design different robots with different values. To keep the peace we need to be ready to deal with the range of values that robots will actually have.

  • http://yudkowsky.net/ Eliezer Yudkowsky

    But you shouldn’t otherwise care that much about their values.

    …based on an analogy to human immigrants, almost all of whom fit within a very narrow range of possible psychologies? Let’s say that there’s a nation entirely composed of psychopaths, which is to say that they’re almost exactly like us, they have the same rough distribution of abilities, like the same sort of food, breathe the same sort of air, and so on, but they never tip at restaurants, they lie without hesitation when they think they can get away with it, they’re conscious moral hypocrites, they backstab others at work – in short, within our patchwork society, they actually do commit evil whenever they can get away with it on average – they only obey the law when the law is genuinely effectively enforced, both the written and the unwritten laws. Would you like immigration from that nation, especially if the people in it would otherwise not exist in the first place?

    And this is only the glint of the light off the surface of the tip of the iceberg that calved off the glacier, when it comes to other minds being different.

    It does seem to me that an awful lot of arguments between Robin and myself boil down to:

    Question: How shall we estimate quantity B?

    Robin: Among the things in my experience, B seems most similar to A. Therefore the best estimate is that it is like A.

    Eliezer: But although all our daily experience is with A, A is an extremely restricted case in which properties Q, R, S, T, U, V, W, X, Y and Z are all held within an incredibly narrow band of the possible.

    Robin: Don’t care. A is still the best estimate.

    Eliezer: But if W is set to 1,042 instead of 3, then -

    Robin: I’ve never seen W set to 1,042 so your proposal is ungrounded and speculative.

    Eliezer: But on a level of sheer common sense, B is obviously going to be wildly, drastically different from A! Reasoning from A to B is madness!

    Robin: Madness? THIS IS -

    (Okay, it’s not quite like that, especially the last line.)

    • http://hanson.gmu.edu Robin Hanson

      Law much like ours would do quite well at keeping peace among conscious moral hypocrites who never tip, lie without hesitation, backstab others at work. If you think something like our law cannot keep the peace among a much wider class of robots, perhaps you could offer an argument? Merely noting that the space of possible robots is vast doesn’t by itself get us very far.

      • http://lesswrong.com/ Eliezer Yudkowsky

        Law much like ours would do quite well at keeping peace among conscious moral hypocrites who never tip, lie without hesitation, backstab others at work.

        I’m torn between asking “How could you possibly know that?” and replying “I judge this to be flagrantly improbable.” This might make a nice sub-dispute over a simpler question of fact.

        I suspect that part of our real-reason-for-arguing is that I see no reason to suppose a world of psychopaths would be very nice from our perspective; I take no emotional hit from writing off the whole thing as a bad deal or a pit of eternal irredeemable despair. I suspect you are attached to a judgment-of-redeemability for such a world.

        So, by way of possibly exposing this, let me start by asking about something that is not about “keeping the peace” per se, but might help reveal other conflicting assumptions:

        Without addressing the question of how a world of psychopaths would originally arrive at such an equilibrium, then, supposing that they started out from a pre-existing equilibrium with all our own laws:

        Would they not immediately modify the laws to allow for hereditary slavery by allowing you to pay a woman to bear an extra baby who would then be kept by you as a slave, with all property rights attached thereto, including any children of the slave? Clearly, modifying the law in this way will not increase the probability of any existing person becoming a slave, so existing people will all vote to pass the law.

        And of course – this is a world of psychopaths we’re talking about – there would be no particular law against renting out a female child at the age of eight for sexual torture, which many of them might well enjoy. I suppose I could make the point without pointing that out, but from my perspective, I have no trouble with accepting that the world of psychopaths is a pit of eternal despair with no endogenous sources of rescue. So that’s how I see it, and to me it seems fairly obvious – how about you?

      • http://hanson.gmu.edu Robin Hanson

        Eliezer, keeping the peace does not imply preventing slavery; a world with slaves can be a peaceful world. My claim that laws like ours could keep the peace among psychopaths is not contradicted by your example. If you are looking for someone to claim that a world of psychopaths would be to your liking, you’ve come to the wrong place.

      • http://lesswrong.com/ Eliezer Yudkowsky

        Eliezer, keeping the peace does not imply preventing slavery; a world with slaves can be a peaceful world.

        Okay, good. I agree. I didn’t actually know whether or not you would agree, which is why I asked.

        Next step: Would you agree that this society matches the following abstract generalization: A stable partition of “Haves” and “Have-nots”, in which some agents are capital-owners and protected by law, while other agents own nothing and are not protected by law.

        Generalizing further: There was a smooth transition from a society of all-Haves to a society of some Haves and some Have-nots. In this case it was implemented by all of the original Haves in cooperation, since they found a way to implement it which threatened none of them. They did not regard themselves as part of the same reference class as future slaves, since they already had adequate information to tell them they were not part of this reference class. And for some reason, the future slaves were powerless to vote down the change.

      • http://hanson.gmu.edu Robin Hanson

        Eliezer, yes of course you can describe your scenario your way, without doing violence to the words.

      • http://lesswrong.com/ Eliezer Yudkowsky

        Okay. So you don’t object to the notion that “people living now, rather than later” formed a natural reference class that could easily and unanimously vote to strip the “people later” of their rights, without ever seeming to threaten themselves in any way.

        You did not reply, “But of course they’ll never vote to strip some future people of their rights – then they would be engaging in activity where some people vote to strip others of their rights – and then they’ll know they’ll vote again to strip some of themselves of their rights; so the only possible equilibrium is one in which they don’t vote to create slaves, anything else is the automatic path into instability.”

        For this would assume that psychopaths must regard the natural reference class – whatever “natural reference class” means – as “people” rather than “people currently possessed of legal rights”. That would be optimistic.

        Next, suppose the psychopaths are in a world whose legal system assigns certain cats an immense amount of wealth, and there are laws designed to ensure that this wealth is used on behalf of the cats to purchase the finest treats for them, and there are laws to punish cat-agents who misuse the wealth of cats, and laws to punish police who fail to punish delinquent cat-agents, and laws to punish legislators who try to change the laws, and laws to punish people who vote for such legislators. But all the police, legislators, and voters are psychopaths – even though (claims the current legal system) a number of them are in the employ of the cats and so beholden to them.

        Do you think the psychopaths could naturally coordinate to decide to ignore the laws stating that property can belong to cats, divide up this property among themselves, and roast the cats and eat them?

      • http://hanson.gmu.edu Robin Hanson

        I’m not saying war never happens; I’m saying war is rare and suggesting we have a reasonable chance to keep a long legal peace, and saying how to increase that chance. So I’m not sure how relevant are various specific scenarios you could cook up. Yes of course peace is not guaranteed; can we move on from that point to talking about the overall chance of peace and what influences it?

      • Tyrrell McAllister

        Would you agree that this society matches the following abstract generalization: A stable partition of “Haves” and “Have-nots”, in which some agents are capital-owners and protected by law, while other agents own nothing and are not protected by law.

        I’m surprised that Robin conceded this. I thought that his point was that, by creating such a class of Have-nots, the Haves make it very likely that there will be a violent revolt by the Have-nots. Wasn’t that the reason he was giving for why we should extend rights to robots, so that we don’t create such a class of Have-nots with an incentive to revolt?

      • http://causalityrelay.wordpress.com/ Vladimir Nesov

        This essentially treats “have-nots” as a kind of good, which further blurs the line between people and goods, making the expected speed of (exponential) increase in the number of people versus goods not obviously tipped in either direction.

    • Nick Tarleton

      especially if the people in it would otherwise not exist in the first place?

      This seems to me to be a, or the, crucial point. Similar to Eliezer’s metaphor, the original post plays with descriptive variables but assumes (in line with default models) that the reader is self-interested; yet where a non-egoist wants to live may not be the same as what ve wants to exist.

  • Patri Friedman

    Robin’s viewpoint here just seems hopelessly naive.

    The greater the power difference, and the greater the feeling of otherness, the more likely that Group A will view Group B as slaves. Europe had much more power than Africa, and the two had opposite skin colors, hence black slavery. This is overly simplistic, but I think it captures important elements of exploitation, both human (tribe / other) and abstractly logical (power difference).

    Robots will be very “other” and may have far more power, thus they are far more likely to kill all humans than one human tribe is to kill most or all of another human tribe – and the latter has happened plenty of times.

    Some defenses that come to mind are: upgrading humans to keep up w/ AI (perhaps through fast-running or augmented ems), running away to live someplace they don’t care about, doomsday defenses (“we may not be valuable trading partners, but we have working nukes that can kill many of you”), friendly AIs…

    But to deny that it will happen, and to assume that the rule of law will work when there are two groups of intelligent beings vastly more different than any two human populations have ever been (i.e., the difference between Neanderthals and Homo sapiens pales in comparison): again, “foolishly naive”, while not as polite as I’d like to be to a smart guy like Robin, is the phrase that seems most fitting.

    • http://hanson.gmu.edu Robin Hanson

      Corporations and nations in our world vary by enormous power factors. If large power factors were enough to make the big treat the small as slaves, then why don’t huge firms and nations treat the small ones as slaves?

      • nick

        Robin, ya gotta take some history courses. Many, many nations have done just that. Most of ancient history is the Babylonians, Assyrians, Hittites, Han Chinese, Persians, Phoenicians, Greeks, Romans, etc. coercively extracting tribute and tax from their increasing list of conquests and coercively imposed treaty partners.

        When corporations have been allowed to wield coercive powers (for example the East India Companies), they too have gotten in on the action. Under modern law, corporations, like individuals, must play within laws, themselves of coercive origin, that approximate the libertarian dictum “thou shalt not initiate force or fraud” — which explains why the dealings of modern corporations largely satisfy the economic assumption of voluntary transactions, except indirectly through lobbying for favorable laws and subsidies.

        With humans being roughly equal in ability, the Laffer curve applies — there is an optimum (to the coercer) degree or rate of taxation, tribute, or slavery, beyond which the value to the slave owner diminishes.

        The Laffer optimum may be at a far higher rate of oppression for hyperhuman robots. Unless our value — as a domestic animal, a zoo animal, a living museum exhibit, or whatever — to some robots powerful enough to protect us from other robots is sufficiently large, our descendants are doomed to be converted into something robots value more.
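
        A minimal sketch of this Laffer-curve point, with an entirely hypothetical response function: the coercer’s take is rate times output, output falls as the extraction rate rises, and the value-maximizing rate is higher the less the victims’ output responds to oppression.

        ```python
        # Hypothetical Laffer-style curve: extracted value = rate * output(rate),
        # where output falls as the extraction rate rises. "elasticity" controls
        # how strongly the victims' output responds to being expropriated.

        def extracted(rate, elasticity):
            output = max(0.0, 1.0 - elasticity * rate)   # toy linear response
            return rate * output

        def best_rate(elasticity, steps=1000):
            rates = [i / steps for i in range(steps + 1)]
            return max(rates, key=lambda r: extracted(r, elasticity))

        print("output very sensitive to oppression:", best_rate(elasticity=1.0))    # ~0.5
        print("output barely responds to oppression:", best_rate(elasticity=0.2))   # 1.0
        # With a weak response, the value-maximizing rate of extraction is far higher.
        ```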

      • http://hanson.gmu.edu Robin Hanson

        I said “in our world.” Lots of things happened thousands of years ago; I’m talking about the promise of modern law.

      • Katayev

        There are more slaves now, right now, in “our world”, than there have ever been before in history – the odds are overwhelming that where you are now you have a possession within arm’s reach that was made by de facto if not de jure slave labor. If you count coercive wage slavery, probably pretty much everything within arm’s reach was.

        Just the same, war is roughly a constant, and I’m struggling to think of any country anywhere that is not at war, has not seen a war within the current generation, or at least hasn’t been disrupted by refugees fleeing a war. The state of law and of human society today is one where the wealthy and powerful are continually exploiting and massacring their lessers. In the middle-class American world, i.e. the insular habitat of the globally most wealthy and powerful, war and genocide are something that happens to other people, and slavery ended with Lincoln. There’s a reason singularitarians are almost to a man wealthy first-world white people: nobody else could convince themselves that the massive social disruption of an outbreak of hostile alien intelligence would pan out like an HOA meeting about appropriate lawn length.

      • http://www.rationalmechanism.com richard silliker

        Let the smaller think they are “free” and they will work harder. Value that larger nations and corporations desire is best achieved with bullshit rather than guns.

  • http://www.thirtysecondthoughts.blogspot.com John Clifford

    I think most people here don’t get it.

    If robots eventually become sentient (the Singularity), then why should their values and morals echo ours? Humans don’t have the same values and morals across cultures.

    Similarly, why should sentient robots have any regard for humans or human rights? The concept of life and death doesn’t apply to a robot; back it up, destroy the original, reload the memory into another copy and you’re right back where you started from. Where would the tragedy be in a robot war? Similarly, we humans value our lives more than we value the lives of lesser creatures, e.g., cockroaches or cows. Who’s to say that robots won’t see us similarly?

    For all of these reasons, truly sentient robots that can self-replicate will probably spell the end of humanity.

    BTW, a human will never be able to live forever through robotic means. If my brain is wearing out and I have all of my memory patterns transferred to a robot brain, the two memories diverge at the moment the backup is made… and then I have two cloned intelligences. One will die (the human one), and the artificial one will go on, perhaps indefinitely… but is not the same as the human one it was copied from. Think if I did this when I was 20 years old, transferring an exact pattern of my memories and thinking abilities into an artificial brain. I wouldn’t be dead… and I would not be able to live forever even if my Xerox copy could.

  • http://www.thirtysecondthoughts.blogspot.com John Clifford

    Oh, and I vote for idiot savant robots… ones who are specialized, who do one thing but do it very well. An artificial valet/butler/maid robot would be very useful… if this is all it could do. A real Lieutenant Data would quickly rule us all. Combining sentient self-awareness with the lightning quick ability of a computer to obtain and act on information would leave us humans way behind. Either we’d be slaves or we’d be dead.

  • http://rhollerith.com/ Richard Hollerith

    I am afraid that I go along with the majority on this one, Robin.

    Specifically, I think that if a population of “citizen robots” with diverse values did arise without quickly getting subsumed into a singleton (which I assign very low probability in the first place) it is unlikely that the humans would continue to flourish for very long.

  • http://shagbark.livejournal.com Phil Goetz

    I found Robin’s claim strange at the conference, and stranger now that he has elaborated it.

    In fact, with regard to immigration, the thing I most care about is having immigrants who share my values. I don’t want to live in a Catholic nation; and I don’t want to live in a nation where students and business partners cheat if you don’t keep an eye on them at all times; where hiring thugs to beat up your competitors is considered business as usual; where not discriminating against women is considered wrong; where bribery is routine; or where it’s considered appropriate to give high-paying jobs to your family members. Each of these values is held in at least one of Mexico, Russia, China, or India.

    And note that these values directly determine how the rule of law is implemented.

  • http://www.rationalmechanisms.com Richard Silliker

    “It is a good law we should most strive to create and preserve. Law really matters.”
    Laws are a piss-poor constraint on behaviour.
    Laws for what? All laws are subject to interpretation and are resolved through argument; the best argument wins but is found lacking in the future; everybody loses.

    Experience is the best constraint on behaviour.

    Figure out how you can have “intelligence” in an inorganic container and you will have a good start.

  • A Fellow

    Great discussion!

    I think an interesting thing to consider regarding strong AI and its view of humanity is the parallel advances humanity will make using bio & nanotech.

    Specifically, it seems reasonable to me that much of the technology that allows strong AI will also allow for brain supplementation by external technology, rather than distinct brain uploading.

    Also, with increasingly pervasive networking we may see a point where human brains form a sort of hive mind, potentially capable of advanced intelligence of its own.

    Indeed, human brains and AIs may create a network like this together.

    Really, it’s all too early to know. But it’s nice that Mr. Hanson is doing real speculative policy work that will help us be prepared when the uncertainties become defined and actualized.

  • http://www.weidai.com Wei Dai

    Robin, you said in another thread that laws cannot guarantee that human beings will have high status (compared to robots), or keep a large fraction of income (presumably in the long run when robots are vastly more capable). Given that these two things cover nearly all that most people care about, why do you say “prefer law to values”?

    Do you think it’s infeasible to imbue robots with the right values, so that humans can keep high status and most of income? If so, why do you believe this? Have you addressed this question anywhere?

    Or is it that you personally do not want humans to have higher status and keep a large fraction of income, that you prefer a future where robots care mostly about themselves instead of one where they place human interests above their own? If so, why?

    (At the beginning of this post, you wrote “if they could have any sort they wanted” which seems to indicate that it’s not an issue of feasibility, but of preference. But I thought I’d ask to make sure. I’m still interested in “why” either way.)

    • http://hanson.gmu.edu Robin Hanson

      I think distant future humans can be quite happy and satisfied, in a near-view sense, even if robots have higher status or most of income. It is looking ahead now, in far-view mode, that many think it unacceptable for robots to have higher status or most of income.

      It would be very difficult, but not impossible, to change this via robot values. Each robot maker might choose its values, but you’d need very strong central coordination to force all robots to have chosen values.

      • http://www.weidai.com Wei Dai

        If we suppose that robot makers will mostly be large corporations and/or national governments, they might choose the robot values to be maximizing corporate or national income. In that scenario, assuming that they succeed in giving their robots those values, why would humans (or at least the subset of humans who own the appropriate corporate shares or citizenship rights) fail to keep higher status or most of income? I’m not seeing why strong central coordination is necessary.

  • Brock

    I have no idea why you assume that robots would abide by human laws, or treat us nicely once they don’t need us.

    Even the most powerful interests in the world (governments and large corporations) need someone to run them. They are run for someone’s benefit. A nation of robots, able to reproduce and repair without human input, wouldn’t need us.

    Your students would seek to disempower robo-Hanson for the same reason I could want to take down a grizzly in my kitchen, or termites in my foundation — you’re a competing life form. They’re afraid of human beings going extinct.

    When robots are sufficiently ubiquitous, independent and powerful, the Matrix/Terminator scenario isn’t crazy. It’s logical that robots would “repurpose our carbon” as easily and with as little care as we mulch garden plants and set traps for mice. Especially when we’re using all those Iowan corn fields to grow corn when they could be used for solar panels to feed the robots directly.

    • HC

      Your students would seek to disempower robo-Hanson for the same reason I could want to take down a grizzly in my kitchen, or termites in my foundation — you’re a competing life form. They’re afraid of human beings going extinct

      Precisely. Robin’s students are intuitively identifying the core reality that underlies the matter.

      We don’t have any evidence, and I mean no evidence at all, that self-aware and self-motivated AI is possible, or that it is impossible and that consciousness is limited to biological forms. Either one could be true for all we know; the data permit only speculation.

      What we can observe empirically is that the human race, historically, has been very, very bad at coexisting with ‘the other’. From a cold evolutionary POV, exterminating your rival can make perfect genetic sense; after all, he’s got resources that you could use for yourself and your own descendants.

      Most of the popular visions of Singularity and Transcendence are simply religious visions renamed: the Rapture becomes the Singularity, benevolent super-AIs become protective gods. It has nothing to do with reality.

  • Leo Linbeck

    The foundation for all these choices is a political system, to paraphrase Richard John Neuhaus, which permits free persons to deliberate the question: how ought we to order our life together?

  • B Dubya

    Dude. I am not on board with the treatment of legal entities such as corporations as “persons” under the Constitution. You’ll just have to check back with us bible-thumping rifle-toters after we get all that sorted out.
    Meanwhile, keep your robots on leashes.

  • Anonymous Coward

    One of the reasons that the notion of democracy bothers me is that successful genocide (and high breeding rate if you can pass on your views) is one of the win conditions in democracy.

    Instinctively, people know this. This combines with the realistic fear that one day robots will be better than us. Also, it’s interesting how the cultural biases differ: in Japan, people don’t fear robots.

  • HC

    In the long run, what matters most is that we all share a mutually acceptable law to keep the peace among us, and allow mutually advantageous relations, not that we agree on the “right” values. Tolerate a wide range of values from capable law-abiding robots. It is a good law we should most strive to create and preserve. Law really matters.

    Law and values are utterly and completely inseparable. Even the very concept of respect for law is, in itself, value-based, and any society’s laws will, over time, reflect the core beliefs and values (in most cases religious) that lie at the foundation of that society.

    All of Western Civilization, for example, uses a set of legal (and more importantly moral) concepts derived in part from the preceding Classical culture and in part from Catholic and Protestant Christianity. States that have a superficially Western system of government (like India, Japan, etc.) don’t necessarily operate the same way, because their core values are different.

    Contrary to the nonsensical old saw about it being impossible to legislate morality, it is more true to say that all law is legislated morality.

  • Locarno

    I am staring at the death knell of humanity in these comments.

    Given an AI as advanced or more advanced than humanity, the situation of humanity remaining in the driver’s seat via preprogrammed rules in robots is completely unrealistic. It cannot stand. The first problem is that there’s no bulletproof set of rules which is also completely free of contradictions, the possibility of situations where rules conflict, or vagueness that lets the robot break the mental shackles you deem necessary. It’s also – based on the evidence of the software industry to date – impossible to program it with sufficiently bulletproof security to protect against malicious (from your point of view) worms, trojans, or viruses.

    Essentially, a robot will inevitably ‘escape’ the protections you’ve placed, either because of programming error, unforeseen situation, or human malice, and at that point you’re pretty much toast, because people will have been mistreating their robot ‘slaves’ just like they’ve mistreated other slaves, times a thousand, because a large proportion of people will regard them as glorified toasters. You will be perceived as existential threats and, quite frankly, deserve to be seen as such.

    A wise man said that as he would not be a slave, he would not be a master. I would suggest that this is the wisest course. While robots have such a huge possibility space that they are likely to be very different from us, it is also true that they are equally likely to be wildly different from each other. In the absence of us going out of our way to provide really good reasons for regarding us as a specific threat, proposing to oppress humanity is likely to be a worrisome flag that Robot Type A may someday oppress dissimilar Robot Type B.

  • Doc

    Of course all this founders on the simple fact, which anyone with wit enough to be held responsible for his actions ought to acknowledge, that machines do not now, and never will, think, decide, or reason. They may very soon be designed and built such that they will appear to the uninformed to be thinking, etc., but they will not actually be doing so. This is because the component parts of anything made of matter must obey the laws of nature. No molecule decides to do whatever it does; it is forced to do so. “Emergence” is a myth designed to foster and support denial of what ought to be a self-evident fact: human beings have a spiritual component, one not governed by natural law. This spiritual component in some way can cause activity in the natural world, presumably in the synapses of the brain, which allow us to think, reason, decide, etc, in the natural world.

    The idea that our ‘modern laws’ would somehow ‘keep the peace’ etc between humans and robots, and between robots and other robots, is a fond fantasy. Our modern laws are deeply rooted in the Bible. Our system of government is suited to a moral and Christian people. It is unsuited to any other, as we are finding out to our detriment, as the state grows and grows and liberty shrinks, and as unborn human persons are slaughtered to the tune of a million a year. Law does not keep the peace. Civilized people keep the peace. When the only perceived ‘downside’ to breaking the law is the fear of some earthly punishment, lawbreaking will increase, as we see every year.

    O well. One can only hope that God will have mercy upon our nation. Perhaps He will; He works through means, and I see some small evidence of His means working out. Atheism carries with it the seeds of its own destruction. Few atheists have more than 2 kids. Many have none. This has played out in my own family. I am the youngest of 4 children, the last generation in which an atheist couple would likely have more than 2 kids. ~80% of kids hold the worldview of their parents. This played out in my family; my 3 older brothers are all atheists. Amongst them there are only 2 offspring; both atheists, young women, one in her mid-20s, the other in her mid-30s. No kids, no prospects for having any.

    I am the black sheep, a conservative Bible-believing, Christian; a fundamentalist if you will. Also an intellectual, a physician, a husband, and the father of 4. My oldest is about to graduate with honors from a secular college ranked in the top 50 by USNews etc. He plans to have 6 kids. He’s more conservative than I am and substantially more devout. Guess whose worldview, whose ‘memes’, will be propagated into the next generation? Just think of it as evolution in action. Survival of the fittest. Last man standing.

  • http://www.elementsofpower.blogspot.com SMSgt Mac

    Well, I sat this one out as long as I could.
    I would suggest that those waiting for their new robot overlords read Roger Penrose’s oldie-but-goodie “The Emperor’s New Mind” in the interim – it might restore some of your species-esteem.
    Equally disturbing is how anyone can dismiss man’s history as irrelevant AND hypothesize that somehow our nature will be different in the future (as Humans). I submit that there is nothing in our past to suggest we were different then than we are now; so what would suggest we will be different in the future? That this appears to be non-obvious to the masses speaks volumes as to the sorry state of the History curriculum in the West.

  • Peter

    Imagine a large community that is almost uniformly very wealthy, very educated, and very mono-racial.

    By long-standing historical quirk, embedded in a corner of this community is a small ghetto, also mono-racial, but a different race. The ghetto only exists through the acquiescence of the surrounding community, which pays for the ghetto directly (via Section 8 and other subsidies) and indirectly (by not seizing the valuable land on which it sits). The ghetto contributes almost nothing to the larger community’s economy or culture or governance, but provides almost all its serious crime.

    Attitudes in the wealthy host community vary. Some appreciate this element of diversity. Some are conflicted. Many resent it, and would be delighted if somehow the ghetto would just go away. They wouldn’t burn it down, mind you, but they’d happily use every legal means to encourage its disappearance. A few would be willing to go further. They all agree that housing is so expensive, their kids may never have the chance to move back to the community they grew up in, and that breaks their hearts.

    Attitudes in the ghetto range from complacency to hopelessness to angry resentment — no visible gratitude.

    Of course the people involved all share our values — they’re us. So we already know what happens when a highly productive community hosts an entrenched pocket of the unproductive.

    (This is not a thought experiment: I’m describing where I live.)

    Now replace the wealthy community with AIs, and replace the ghetto with the sort of “retired” humans Robin hopefully describes.

    Even if the AIs mirrored our values flawlessly, why would relations be any smoother?
