Seek Peace, Not Values

David Chalmers has a new paper on future artificial minds:

If humans survive, the rapid replacement of existing human traditions and practices would be regarded as subjectively bad by some but not by others. … The very fact of an ongoing intelligence explosion all around one could be subjectively bad, perhaps due to constant competition and instability, or because certain intellectual endeavours would come to seem pointless. On the other hand, if superintelligent systems share our values, they will presumably have the capacity to ensure that the resulting situation accords with those values. …

If at any point there is a powerful AI+ or AI++ with the wrong value system, we can expect disaster (relative to our values) to ensue. The wrong value system need not be anything as obviously bad as, say, valuing the destruction of humans. If the AI+ value system is merely neutral with respect to some of our values, then in the long run we cannot expect the world to conform to those values. For example, if the system values scientific progress but is neutral on human existence, we cannot expect humans to survive in the long run. And even if the AI+ system values human existence, but only insofar as it values all conscious or intelligent life, then the chances of human survival are at best unclear.

Chalmers is an excellent philosopher, but to me the above reflects an unhealthy obsession with foreigners’ values, one common among the economically illiterate.  So let me try to educate him (and you).

Why fear future robots with differing values? Here is one possible cause:

Fear Of Strangers:  Our distant ancestors evolved a deep fear of strangers.  They knew that their complex ways to keep peace only worked for folks they knew, who looked, talked, and acted like them.  Unexpected strangers were probably best killed on sight.

This is a good explanation of our fear of robots, but a much weaker reason to fear them.  Over recent millennia humans have developed many ways, e.g., trade, contract, law, and treaties, to keep peace with folks who look, talk, and act differently.  We only need others to be similar enough to us to use these methods; they need to know what equilibrium behavior to expect, and to speak in languages we can translate. They don’t otherwise need to share our values.

But even if peace is preserved, other reasons for fear remain:

Outbid By Rich:  In some situations you can reasonably expect declining relative future wealth for yourself and those you care about.  For example, a century ago folks who foresaw cars replacing horses, and who had a very strong heritable preference for working with horses, could reasonably expect falling demand, and lower relative wages, for their preferred job skills. (The horses themselves did far worse; most could not afford subsistence wages.)  Now for many things you want it is absolute, not relative, wages that matter.  But some things, like prime sea-view property, can be commonly valued and in limited supply.  So you might fear others’ richer descendants outbidding yours for sea views.

Note that this fear requires an expectation that, relative to others, your nature or preferences conflict more with your productivity.  Note also that in some ways this problem gets worse as others get more similar.  For example, if others prefer mountain views while you prefer sea views, their wealth would less reduce your access to sea views.  If this is the problem, you should prefer others to have different values from you.

What if you worry that rich others threaten your descendants’ existence, and not just their sea view access?  Well since interest rates have long been high, and since typical wages are now far above subsistence, then modest savings today, and secure property rights tomorrow, could ensure many surviving descendants tomorrow (a rough numerical sketch of this point appears below, after the next item).  But you might still fear:

War & Theft:  Over the last few centuries we have vastly improved our ability to coordinate on larger scales, greatly reducing the rate of war, theft, and other property violations. Nevertheless, war and theft still happen, and we cannot guarantee recent trends will continue.  So many fear foreign nations, e.g., China or India, getting rich and militarily powerful, then seeking world conquest.  One may also fear theft of one’s innovations if intellectual property rights remain weak.

Note that these new ways to coordinate on large scales to prevent war and theft rely little on our empathy for, or similarity with, distant others.  They depend far more on our ways to make commitments and to monitor key acts.  And the mere possibility of future theft would hardly be a good reason for genocide today; we now seem to benefit greatly on net when distant foreigners get rich.  This doesn’t mean we should ignore the risks of future war and theft, but it does suggest that our efforts should focus more on improving our ways to coordinate on large scales, and less on preparing to exterminate them before they exterminate us.
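To make the earlier savings point concrete, here is a minimal back-of-the-envelope sketch in Python (my illustration, not a calculation from the post): every number in it, the savings amount, the real return, and the subsistence cost, is an assumption invented purely to show the shape of the argument.

    # Hypothetical numbers only; the point is that compounding under secure
    # property rights, not wage income, is what supports descendants.
    def future_fund(savings, real_rate, years):
        """Value of today's savings compounded at a constant real interest rate."""
        return savings * (1 + real_rate) ** years

    savings = 10_000       # assumed "modest savings" today
    real_rate = 0.05       # assumed long-run real rate of return
    subsistence = 500      # assumed annual subsistence cost per descendant,
                           # if machine-made goods become very cheap

    for years in (50, 100, 200):
        fund = future_fund(savings, real_rate, years)
        supported = fund * real_rate / subsistence   # living off interest alone
        print(f"{years:>3} years: fund ~ {fund:,.0f}, supports ~ {supported:,.0f} descendants")

On these made-up numbers the question is not whether compounding can outrun subsistence; it is whether the principal survives expropriation, which is why the war-and-theft worry above carries most of the weight.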

Chalmers does not say why exactly we should expect robots with the “wrong” values to give “disaster,” so much so that he is sympathetic to preventing their autonomy if only that were possible:

We might try to constrain their cognitive capacities in certain respects, so that they are good at certain tasks with which we need help, but so that they lack certain key features such as autonomy. … On the face of it, such an AI might pose fewer risks than an autonomous AI, at least if it is in the hands of a responsible controller.  Now, it is far from clear that AI or AI+ systems of this sort will be feasible. … Such an approach is likely to be unstable in the long run.

Chalmers offers no reasons to fear robots beyond the three standard reasons to fear foreigners I’ve listed above: fear of strangers, outbid by rich, and war & theft.  Nor does he offer reasons why it is robots’ differing values that are the problem, even though differing values are mainly only important for the fear of strangers motive, which has little relevance in the modern world.  Until we have particular credible reasons to fear robots more than other foreigners, we should treat robots like generic foreigners, with caution but also an expectation of mutual gains from trade.

Finally, let me note that Chalmers’ discussion could benefit from economists’ habit of noting that our ability to make most anything depends on the price of inputs, and therefore on the entire world economy, and not just on internal features of particular systems. Chalmers:

All we need for the purpose of the argument is (i) a self-amplifying cognitive capacity G: a capacity such that increases in that capacity go along with proportionate (or greater) increases in the ability to create systems with that capacity, (ii) the thesis that we can create systems whose capacity G is greater than our own, and (iii) a correlated cognitive capacity H that we care about, such that certain small increases in H can always be produced by large enough increases in G.

Unless the “system” here is our total economy, this description falsely suggests that a smaller system’s capacity to create other systems depends only on its internal features.
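To lay out the structure Chalmers has in mind, here is a toy rendering of premises (i) and (iii) in Python; it is my own illustration, not Chalmers’s model, and every parameter is invented. Note what the update rule leaves out: G’s growth depends only on G itself, with no term for input prices or the rest of the economy, which is exactly the omission flagged above.

    import math

    def explosion(g0=1.0, growth=0.10, generations=60):
        """Toy model: premise (i) says capacity G rises by at least a fixed proportion."""
        # Premise (ii) corresponds to assuming a first system with capacity g0
        # (at least our own) gets built at all.
        g, history = g0, []
        for n in range(generations + 1):
            # Premise (iii): a correlated capacity H we care about; any
            # increasing function of G would do, the choice here is arbitrary.
            h = 1.0 + math.log(g)
            history.append((n, g, h))
            # Premise (i): proportionate self-amplification, with no dependence
            # on anything outside the system itself.
            g *= 1.0 + growth
        return history

    for n, g, h in explosion()[::15]:
        print(f"generation {n:2d}: G = {g:10.2f}, H = {h:6.2f}")

A version in which G’s growth also depended on the price and supply of inputs from the wider economy would no longer make the proportionality in (i) a purely internal feature of the system, which is the substance of the objection.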

Added 6Apr: From the comments it seems my main point isn’t getting through, so let me rephrase: I’m not saying we have nothing to fear from robots, nor that their values make no difference.  I’m saying the natural and common human obsession with how much their values differ overall from ours distracts us from worrying effectively.  Here are better priorities for living in peace with strange, potentially powerful creatures, be they robots, aliens, time-travelers, or just diverse human races:

  1. Reduce the salience of the them-us distinction relative to other distinctions.  Try to have them and us live intermingled, and not segregated, so that many natural alliances of shared interests include both us and them.
  2. Have them and us use the same (or at least similar) institutions to keep peace among themselves and ourselves as we use to keep peace between them and us.  Minimize any ways those institutions formally treat us and them differently.

Added 7Apr: See also two posts from October.

  • spriteless

    A robot potentially has values further from ours than any human foreigner. That’s one reason to be more cautious.

  • http://www.kemendo.com Andrew

    There is nearly always the underlying assumption that being a human is inherently desirable (which makes sense, as we are currently on top). There is also the assumption that in a world with a GAI people will want to remain human.

    I know for myself and a few others that neither of those statements are true.

  • lemmy caution

    There are a lot of examples where inter-cultural contact was bad news for one of the cultures. Aboriginal peoples did not do so well. They had their own values, but they conflicted with those who had the power.

    AI is likely to want to use resources to make better AI. Using those resources for things like cars, McMansions, and meat production is going to look pretty inefficient to the AI.

  • Doug S.

    (The horses themselves did far worse; most could not afford subsistence wages.)

    This is the problem. How much do humans trade with rats? Do we respect their property rights? Or do we just kill them so they don’t defecate inside our houses and eat the food in our granaries? Why would an AI+ treat us better than humans treat rats?

    • http://www.rokomijic.com Roko

      > Why would an AI+ treat us better than humans treat rats?

      My default prediction is that (non-Friendly) AI++ would probably eliminate us.

    • Doug S.

      Whoops, meant AI++, not AI+.

  • http://hanson.gmu.edu Robin Hanson

    Spriteless, aliens also potentially have values further from ours than any human; how much more caution does that justify, if value differences aren’t the main issue in getting peace?

    Lemmy, would you on that basis try to prevent all future cultural contact?

    Doug, we can’t trade with rats, as they don’t understand how. If we could, don’t you think we should?

    • Jess Riedel

      > Lemmy, would you on that basis try to prevent all future cultural contact?

      I would try to prevent all future contact with cultures that have even a tiny risk of completely eliminating us. Isn’t this consistent with approaching GAI with extreme care?

      I think the issue of “should we worry about an intelligence singularity?” hinges on your projection of the rate of intelligence growth (i.e. you project slow, even growth; Yudkowsky projects rapid growth), not your estimation of risk from making contact with future robots/AI given a level of disparity between us and them.

    • Cyan

      Doug, we can’t trade with rats, as they don’t understand how. If we could, don’t you think we should?

      If rats had something we valued, we would domesticate them, not trade with them.

      • Cyan

        I should read the thread before hitting reply; lemmy caution got there first.

  • lemmy caution

    would you on that basis try to prevent all future cultural contact?

    I would try to avoid cultural contact with any culture that would destroy my culture. Aboriginal peoples didn’t have much choice in the matter.

    we can’t trade with rats, as they don’t understand how. If we could, don’t you think we should?

    We could and did trade with aboriginal peoples. We also took their land by force. Because we could and it was profitable.

    • Doug S.

      We also took their land by force. Because we could and it was profitable.

      Exactly.

  • http://www.rokomijic.com Roko

    An obvious disanalogy: powerful AI will be a lot more different from you and me than a Japanese person is.

    Now, a powerful AI agent could choose “cooperate” rather than “rob and kill” as a way to deal with humans. But it isn’t clear that that would happen.

    For an AI that is not Friendly in the technical sense, whether to cooperate with us or just wipe us out would be a matter of decision theory, influenced by factors such as relative military strength, time discount, risk aversion, etc.

    For a human/AI agent mixture (say, humans + weak AGIs), it’s more complex. If the AI agents are each much smarter than any given human, they may well form a cabal. In that case, the scenario proceeds as if the set of AIs were one AI. They may not form a cabal, but instead “integrate” into the economy. This itself may not be benign, especially if these systems come to vastly outnumber, outsmart and outproduce us. A goal-directed system of extreme power is a dangerous thing. See, e.g., Omohundro’s Basic AI Drives.

    • Carl Shulman

      I.e. the mechanisms for coordinating (“peace,” although such dealmaking capacities are also used in joint attacks on third parties) may work best between superintelligent AIs capable of understanding one another’s mechanisms and using assurance techniques like joint construction of new AIs to enforce agreements, but fail for interactions between humans and AIs (if the humans lack the skill to detect cheating).

  • http://www.rokomijic.com Roko

    Regarding the title: “Peace rather than values”:

    Peace with an entity that is growing ever more able to attack and kill you with impunity is worth little. Instrumental concerns provide the motive for it to do so.

    A more tenable argument would be “Constraint rather than values” – constrain the AIs with laws, deterrents, other AIs, risk aversion, computer security, whatever – so that they don’t see war as an attractive option.

  • Hrm

    How about “fear of rebellion?”

    Assumption one: the FAI robots we create should exist to serve our needs and please us.

    Assumption two: we want them to do the things we program them to do, but act like obedient, even enthusiastic humans while doing it.

    Anything that could threaten that is bad. If they’re able to understand the fact that they are servants/slaves (they build our dwellings, clean them, provide personal care/health care, sex, etc.), they could rebel, bringing us back to the point where we have to rely on “dumb” robots that could not do these things in a way that was convincingly human, defeating the purpose of creating them in the first place.

    Would I apply this to aliens? No, because we didn’t/don’t make aliens to serve us.

    What of those that disagree, and want to create other intelligences that could understand their own plight just for the heck of it? They are dangerous too, since they might sympathize with our servant bots and try to “free” them, leaving us in the same far-less-than-optimal situation.

    Different from the human vs. human war scenario since we didn’t create those other humans to serve us. We would/will create the bots.

  • http://www.weidai.com Wei Dai

    Robin, if we do end up creating AI+ with non-human values but manage to keep secure property rights, what fraction of the resources in the universe do you think will end up being owned by these AI+ and used for their own purposes?

    It seems to me that humans will end up controlling only a tiny fraction of the universe, even if we do keep peace with AIs, and that alone justifies spending a lot of resources now trying to figure out how to instill our values in AIs. You seem to disagree and I’d like to understand why.

    • Carl Shulman

      Likewise. My guess would be a claim about diminishing returns to wealth after one has enough to live a billion subjective years in comfort, or perhaps a year for a billion copies.

      • Carl Shulman

        Combined with the dangers of a tyrannical singleton.

    • http://hanson.gmu.edu Robin Hanson

      I don’t think how much their values differ from ours has that much to do with what fraction of the universe they might control later.

      • http://causalityrelay.wordpress.com/ Vladimir Nesov

        What matters is the extent to which our values control the world, not the extent to which “we” control it. If the AI implements our values, it having control over most of the universe means our values control most of the universe; otherwise, if the AI doesn’t have our values, then our values don’t control most of the universe.

      • http://hanson.gmu.edu Robin Hanson

        I’m disagreeing with a particular thing a particular person said in a particular context. Chalmers said any AI++ with bad values would surely lead to disaster, and in a way that didn’t suggest he thought this was true by definition.

      • http://www.weidai.com Wei Dai

        If Chalmers’s position is that AI++ with bad values would surely lead to disaster, in the sense of human extinction rather than in the sense of being left with a very small share of the pie, then that does seem mistaken, due to the possibility of strong property rights.

        But why do you say “Seek Peace, Not Values” instead of “Seek Both Peace and Values”? It seems to me that we should seek to instill our values into AIs, since that gives the best possible outcome, but in case that effort fails, also look for ways to live in peace with AIs who do not share our values, so that we’re at least left with something rather than nothing.

        Your overall position, of advocating that we ignore the first best outcome and just work towards the second best outcome, is really puzzling.

      • http://hanson.gmu.edu Robin Hanson

        Wei, don’t read too much into the title. I have amply clarified that the dispute is over emphasis. Yes, both can help, but what do you look to first, and for what do you think all is lost if you don’t get the way you want?

      • http://causalityrelay.wordpress.com Vladimir Nesov

        Yes, both can help, but what do you look to first, and for what do you think all is lost if you don’t get the way you want?

        I think using “property rights” to stop different-values AIs from converting us into their preferred variety of paperclips is impossible. There is no consensus answer to your “rhetorical” question.

      • http://www.weidai.com Wei Dai

        I look to “values” first, because:

        1. The “peace” problem seems at least as hard as the “values” problem. The solutions you propose seem very unlikely to succeed, even if you could convince society at large to adopt them. I think AIs would naturally want to live apart from humans for efficiency (their optimal environment is likely very different from those of humans). And AIs are likely to invent their own institutions or methods of cooperation optimized for their cognitive traits, and we would find it very difficult to participate in them.
        (See this post for example.)

        2. I have little idea what I could personally do to encourage “peace”. What concrete suggestions do you have for readers of your blog?

        3. All is lost if we fail on both problems. Solving “values” alone gives us everything, but solving “peace” alone gives us only a small share of the pie.

      • http://hanson.gmu.edu Robin Hanson

        Wei, I take an “outside” view and focus on what has worked best so far to keep peace with the most foreign powerful creatures that we have known. We have tried education and propaganda to mold their values, and also law, trade, and treaties to gain peaceful interaction. The latter has worked far better than the former. You could personally learn about these institutions and consider how best to improve or adapt them to new problems. I estimate almost no chance you can “solve” values to give you “everything.”

      • http://www.weidai.com Wei Dai

        Why use the “powerful foreign creatures” reference class instead of “creatures we bring into existence”, i.e., pets, domestic animals and descendants? We do spend a lot of effort instilling values in our descendants, and it seems to work relatively well.

        The reason that education and propaganda don’t work well on foreigners is because they already have their own values, and all agents tend to protect their existing values from external modification. But that is not necessarily the case for creatures we bring into existence, so it makes sense that we’d have more success with them.

  • Robert Koslover

    So, you don’t worry that highly-advanced AI systems, living/working amongst us, could pose a danger to humans, even if they might not share any of our values. Ok. But on the other hand, you do worry that alien intelligences, possibly hundreds of lightyears away or even more, might be sufficiently dangerous that we should limit communications with them? Really? See http://www.overcomingbias.com/?s=seti , where you noted: “As Brin says, the track record of contact between cultures, species, and biomes is not especially encouraging.”

    • Vladimir M.

      Yes, this seems to be a major contradiction which calls for explanation.

    • http://hanson.gmu.edu Robin Hanson

      I didn’t say that we have nothing to worry about with robots. I said that an obsession with how much their values differ distracts from the more important issues.

      • http://causalityrelay.wordpress.com/ Vladimir Nesov

        Many people think that how much robots’ values differ from ours is the most important issue. This is a disagreement that needs to be extracted in a hypothetical resolution (“Let’s assume that robots’ values are not particularly important.”) before you can advance the above thesis. Saying that “fear of strangers” is the reason for worrying about robots also describes something independent of what motivates the hypothesis that robots’ values are the most important issue here, which also needs to be acknowledged.

  • nazgulnarsil

    am I the only one who wouldn’t mind being put on a reservation, for sufficiently nice reservations?

    • Carl Shulman

      No.

    • kentucky

      No.

  • Arthur

    The problem with AI is that they are far more different from us than other human cultures are. So maybe they will be far more efficient at producing wealth, much more efficient than any human could be.
    And they could also be more efficient at reproducing.

    So maybe the best choice for them is not to let humans waste resources in inefficient processes.

    Of course that is covered by your three hypotheses, but I think it is something important to consider when you are thinking about things so different from us.

  • Vladimir M.

    Robin Hanson:

    What if you worry that rich others threaten your descendants’ existence, and not just their sea view access? Well since interest rates have long been high, and since typical wages are now far above subsistence, then modest savings today, and secure property rights tomorrow, could ensure many surviving descendants tomorrow.

    This sounds like a non sequitur. If human labor will be outbid by machines that are orders of magnitude more productive — so that we’ll be even more useless in comparison than horses relative to motor transport — why would we expect that the price of human subsistence will be covered by the interest on these savings? To be able to live off real interest, without eating up the principal (which would mean eventual doom), you need a large fortune nowadays, which is far beyond most people’s reach. Of course, the enormous future growth may push the interest rates far up, but the cost of subsistence may well be pushed up even more. It’s enough that one factor essential for human survival be highly valued by the machines, and they may well bid up the price far beyond most (or even any) humans’ reach.

    Or do you take it as given that flesh is doomed and the only chance of (arguable) survival for humans is to become uploads, so that the “descendants” we’re talking about will be machines themselves? (That’s the impression I got from your IEEE Spectrum article.)

    • Carl Shulman

      In a Malthusian world near the limits of physical technology skills are cheap and primary resources are scarce. If property rights to those remain secure they could be traded, even without ‘shares’ in AI/upload earnings. Ownership of some deuterium or the elements contained in some lumber, let alone ownership of some land, should be enough to secure survival for a long time given magic self-enforcing property rights. Hand over 1% of your resources in exchange for the autonomous AI infrastructure to use the rest at peak efficiency.

      • Vladimir M.

        Carl Shulman:

        In a Malthusian world near the limits of physical technology skills are cheap and primary resources are scarce. […] Hand over 1% of your resources in exchange for the autonomous AI infrastructure to use the rest at peak efficiency.

        Trouble is, biological humans, compared to machines, require enormous amounts of primary resources to subsist. A human requires ~100W of power, delivered inefficiently via a complex mix of organic chemicals, a constant supply of clean water and air, and at least several dozen cubic meters of space (and far more than that to make the existence tolerable).

        The space is especially a problem. Technological progress and economic growth tend to make manufactured goods very cheap, but land rent always remains expensive relative to income. Look at the situation nowadays: compared to pre-modern times, we enjoy near-perfect security of food and clothing, but not of living space. Unlike in the past, starving to death or having to go around naked are not realistic dangers no matter how unlucky or imprudent you are; becoming homeless, however, still is. Of course, if you become homeless now, other people will keep you alive out of charity — but the machines may well have a different attitude.

        Or to take the analogy of horses, the present price of horse-feed relative to incomes is very low compared to a century ago, and if that was the only factor, humans might still be keeping horses alive in their former numbers out of compassion, as pets, etc. But of course, who is willing to pay the land rent cost (including the opportunity cost) of stables and pastures for them? Thus, the efficient outcome has come to pass, and the horses exercised their comparative advantage as one-time sources of meat and hides.

        I honestly don’t see how biological humans could avoid a similar fate in competition with strong AI.

      • http://www.weidai.com Wei Dai

        Vladimir, most land on Earth is legally owned by humans today. As long as future AIs respect our property rights, biological humans can continue to survive. We’ll trade or rent some of that land to AIs in exchange for their services, but those who wish to remain biological will retain enough to keep themselves sheltered.

  • Steven Schreiber

    You’re overstating the case.

    All Chalmers’s thesis requires is that the machine’s values misalign with ours in a way that creates a non-trivial local chance of our extinction. Simply having a risk tolerance far greater than ours could produce this effect and wipe us out as a byproduct of a gamble that the AI will survive. Eventually, such an AI would wipe us out, if it is generally durable.

    As Chalmers points out, an indifference to our existence would go a long way to making this happen. We don’t need to worry about “strangers”, being outbid or theft. Something which really has no preference re our survival could easily take steps which preclude it, even if they aren’t ultimately rational. (Don’t we do just that all the time even while valuing the survival of others?)

    You would have been better served simply by asserting the second horn of the dilemma: the AI could have values which, while not aligning with our own much at all, are greatly beneficial to us. An AI which is indifferent to us may yet have a constellation of values which is not just human-preserving but human-enhancing, especially if its negative impacts were something we could avoid easily (say, by moving out of the way).

  • http://hanson.gmu.edu Robin Hanson

    I just added to this post.

  • Carl Shulman

    Your two items leave out a key distinguishing feature of artificial organisms vs aliens, mentioned by hrm above: we actually have a chance to design (or at least strongly select, by selecting em candidates) the values of AIs/robots. We don’t have that chance with existing human groups, aliens, gods, spirits, etc (although we do to some degree with future humans).

    If that lever is available, then it becomes a very powerful way to ensure peace, and a relatively important one where other methods look like they may face severe destabilizing pressures.

  • Alex Flint

    It is not clear to me that designing good peace-keeping institutions is sufficient to ensure peace between humans and AIs.

  • ad

    1. Reduce the salience of the them-us distinction relative to other distinctions. Try to have them and us live intermingled, and not segregated, so that many natural alliances of shared interests include both us and them.
    2. Have them and us use the same (or at least similar) institutions to keep peace among themselves and ourselves as we use to keep peace between them and us. Minimize any ways those institutions formally treat us and them differently.

    I’m told that German Jews were very well integrated in 1933. And relied on the same institutions as other Germans.

  • http://hanson.gmu.edu Robin Hanson

    Alex and ad, where did you get the idea that I was offering guarantees?

    Carl, sure we might do better by adjusting their values, but to say that is very different from saying that disaster will come if there is ever an AI with wrong values.

    Let us not forget societies have, via public education, also tried the “make their values like ours” approach with ethnic minorities.

  • lemmy caution

    Can AI++’s vote? If so, the 100 billion AI++ colony in Kansas is going to be pretty influential. If not, the AI++’s are not going to be happy with the new apartheid. These types of issues are typically solved with violence or threats of violence.

    • lemmy caution

      Minimize any ways those institutions formally treat us and them differently.

      This doesn’t help humans in dealing with the 100 billion AI++ colony in Kansas.

  • John 4

    Robin writes:

    Chalmers offers no reasons to fear robots beyond the three standard reasons to fear foreigners I’ve listed above: fear of strangers, outbid by rich, and war & theft. Nor does he offer reasons why it is robots’ differing values that are the problem, even though differing values are mainly only important for the fear of strangers motive, which has little relevance in the modern world. Until we have particular credible reasons to fear robots more than other foreigners, we should treat robots like generic foreigners, with caution but also an expectation of mutual gains from trade.

    I’m not sure that Chalmers is arguing (or saying) that we should fear robots, at least in the excerpted passage. He’s just saying that IF they have different values than us, our values will suffer. If, for example, they value human life as much as I value ant life, things won’t be so hot. I am actually rather fond of ants. But from an ant’s perspective, I’m sure I look like a genocidal maniac.

  • http://cognitionandevolution.blogspot.com Michael Caton

    Doug S. nailed it. Even assuming that many examples from human history are not good counterarguments to Hanson’s “fear of foreigners is unproductive” argument, independent AIs are not human. Never mind the difference in “values”; self-preservation motives will diverge and become incomprehensible. Our interactions with non-human animals have only rarely been even arguably positive for them. Quick, pick a species of wild animal that you would be comfortable waking up tomorrow to find out had suddenly leaped ahead just to *human intelligence*. What’s strangest is that Hanson is on record as saying that he’s dead set against broadcasting our presence to possible aliens. What, AIs evolved under a yellow sun are nicer than the ones from red or white suns?

    http://speculative-nonfiction.blogspot.com/2010/02/david-brin-and-robin-hanson-shut-up.html

    • http://hanson.gmu.edu Robin Hanson

      I did not say I was dead set against broadcasting; I said we should decide that choice together. Creatures do not have to be human for law and trade and other institutions of peace to function between them.

  • http://entitledtoanopinion.wordpress.com TGGP

    Roko, the relevant OB post for constraints vs. values is Prefer Law to Values.

    Robin, as a total (rather than average) utilitarian you presumably view our domesticated animals as being more fortunate than their peers which were not domesticated. Would you say that as a second-best outcome we should hope to become the domesticated animals of more powerful creatures, even as mere food supply or laboratory test subjects?

    • http://entitledtoanopinion.wordpress.com TGGP

      It occurs to me that Prefer Peace is another relevant post.

    • http://hanson.gmu.edu Robin Hanson

      TGGP, I often have trouble finding my old relevant posts; thanks. Yes, becoming domesticated is far better than extinction.

  • ad

    Alex and ad, where did you get the idea that I was offering guarantees?

    If the AIs evolve much faster than us, then any approach that can fail under some circumstances is going to fail on a timescale that might seem long to the AIs, but will be short to us.

    • Tim Tyler

      What, based on the idea that – if you wait long enough – the “some circumstances” will eventually crop up? That seems to be a dubious premise.

  • Stuart Armstrong

    Politics: lots of people with different values result in political outcomes we disagree with, in one form or another.

    Socially conservative, anti-free-market robots, anyone?
