Robot ethics returns

People are often interested in robot ethics. I have argued before that this is strange. I offered two potential explanations:

  1. Ethics seems deep and human, so it’s engagingly eerie to combine it with heartless AI
  2. People vastly misjudge how much ethics contributes to the total value society creates

A more obvious explanation now: people are just more interested in ethics when the subject is far away, for instance in the future. This is the prediction of construal level theory, which says that thinking about something far away makes you think more abstractly, and in terms of goals and ideals rather than low-level constraints. Ethics is all this.

So a further prediction would be that when we come to use robots a lot, expertise from robot ethicists will be in as little demand as expertise from washing machine ethicists is now.

Some other predictions, to help check this theory:

  • Emerging or imagined technologies should arouse ethical feelings more than present technologies do in general
  • International trade should prompt more ethical feelings than local trade
  • Stories of old should be more moralizing than stories of now
  • Historical figures should be seen in a more moral light than present-day celebrities
  • Space travel should be discussed in terms of more moral goals than Earth travel
  • Ethical features of obscure cultures should be relatively salient compared to familiar cultures

More? Which of these are actually true?

There is definitely some conflicting evidence: for instance, people feel more compelled to help people in front of them than those in Africa (there was an old OB post on this, but I can’t find it). There are also many other reasons the predictions above may be true. Emerging technologies might prompt more ethical concerns because they are potentially more dangerous, for instance; the ethical dimension to killing everyone is naturally prominent. Overall, construal level theory still seems to me a promising model for variations in ethical concern.

Added: I’m not confident that there is disproportionate interest compared to other topic areas. I seem to have heard about it too much, but this could be a sampling bias.

  • Carl Shulman

    This post and the linked past posts seem overly distant from the direct causes of increased work under “robot ethics” and “machine ethics” labels, and the sorts of things that applied ethicists normally spend time on.

    Medical ethics is much larger than robot ethics, as are military ethics and business ethics. What are ethicists in these other domains used for? Some uses:

    1. They teach required ethics courses for medical students, officers-in-training, and business students. Such course requirements are often introduced after major scandals (e.g. Tuskegee for medicine) as a way to bolster the reputation of graduates or of a field. These courses can also convey helpful basic guidance about legal and societal norms.

    2. They are used to help staff committees to review things like when to terminate life support, or internet privacy guidelines for tech companies. These are largely exercises in rubber-stamping, but the involvement of outside ethicists with academic expertise can help to legitimate organizational decisions to employees or outsiders.

    3. They come up with new arguments applying various ideologies to new cases. This can get media attention (and reach students in those required or optional courses) to spread an ideology, or help indicate to ideological allies how to resolve particular questions to show loyalty to the ideology (see Catholic vs secular bioethicists and their interaction with the broader political world).

    Demand for these sorts of functions increases as a field (e.g. robotics) becomes larger and more important, faces strong external critique or opposition, raises more complex policy-relevant questions, and can be used to support contesting political views.

    Thus, there has been a proliferation of academic work on the ethics of social media like Facebook, privacy and web companies, and other already-existing computer-related topics in tandem with the expansion of the underlying activities (and relevant corporate and government committees and policies). There has been even more work by non-ethicists like lawyers and political types.

    Robot and AI ethics have likewise been profiting from new developments in the underlying technologies. The extensive use of unmanned aerial vehicles to kill people has attracted a lot of attention, and raised new policy questions to practical relevance. Among other things, this has led to significant funding from the United States military for work that can be pitched as helping to formulate standards for the use of UAVs (and other combat robots) that will help to deflect public and foreign criticism.

    The continued visible advances in AI, from Deep Blue to Watson, Google Translate to Dragon NaturallySpeaking, ASIMO to BigDog, the Predator drone to the Google car, make it easier to fund or promote work that claims to help manage them.

    Applied ethicists, like academics in general, regularly search for new “low-hanging fruit”: easy publications from applying standard methods to faddish new topics that are more likely to be accepted, cited, or funded. So some have been colonizing the robot/machine ethics area in recent years.

    These academics are not very numerous (most work on other areas most of the time) and cost very little (no huge laboratories, very expensive data collection, or the like), almost nothing in comparison to the robotics and AI industries, and very little relative to the rest of applied ethics. So I would ask: is there even a disproportionate focus on robot ethics to explain in the first place?

  • Mitchell Porter

    Sorry, I can’t take you seriously on this topic. You say it’s perplexing why people would care about robot ethics?! The comparison with washing machines is absurd. A robot is a nonhuman intelligence of human design. As an intelligent being it may be smart enough to be dangerous, as a nonhuman being it has no innate disposition towards human-friendly values, but as an entity designed by human beings it might nonetheless be engineered or taught to have human-friendly values. The topic ought to be intellectually fascinating, and the “value systems” adopted by artificial intelligences will potentially decide the long-term future of intelligence on this planet, so… I don’t get it.

    • Mark M

      We humans, the “creators,” are pretty selfish. Robots that we bring into this world will be created with a purpose that furthers the goals of mankind. Those goals might not be lofty (doing laundry and taking out the trash, say), but they are human goals. We simply are not going to invest a lot of time and money to create what amounts to a new independent life form, in many ways superior to us, and then, on a large scale, set it free to compete with us for resources and jobs.

      Our robots and AI will be created as servants whose behavior will reflect the ethics of the owner – probably with some safeguards built in to protect life and property (similar to Asimov’s 3 laws).  You won’t be able to set your robot free any more than you can set your car free.

      • dmytryl

        Yes. Another case in point: the friendly AI crowd’s obsession with ‘friendliness’ ethics and virtually zero interest in failsafes and safeguards (such as wireheading, non-self-preservation of AIXI, etc.; one can imagine an AI for which perfect one-instant wireheading is a possibility and where you, the operator, hold the keys to the AI’s paradise; you can’t control an ‘ideal mind’ like this, as it’ll talk you into giving it the key, but anything practical could be well controllable).

        It is also the case that ethics is easy and ‘safe’ (in terms of potential injuries to the ego) to think about, in contrast to well-specified technical arguments where, when one talks nonsense, it’s not a matter of opinion. Furthermore it is just a lot easier to fantasise about an omnipotent god, especially for those with a Christian background.

      • Will Sawin

        I don’t understand what you intend to suggest are potential failsafes and safeguards. Do you want the operator of the AI to control the AI’s reward function? Then the AI will optimize pleasing the operator. Here is an example problem with that: in normal life we often have the ability to make those we work with happier with us through deception. This ability grows with intelligence, and thus with how useful the AI is. You don’t need a superpowered AIXI to suffer this problem.
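
        A minimal toy sketch of that incentive in Python; the names and numbers here (expected_reward, skill, the approval values) are made up purely for illustration, not taken from any real system:

            # Toy model: the reward is just "operator approval", and "skill" is the
            # agent's ability to make a deceptive report go undetected.
            # All numbers are purely illustrative.
            def expected_reward(action, skill):
                if action == "honest_report":
                    return 0.6            # steady approval for an honest but unflattering report
                if action == "deceptive_report":
                    return 1.0 * skill    # approval 1.0 if undetected, 0.0 if caught
                raise ValueError(action)

            def best_action(skill):
                # The agent picks whichever report maximizes expected approval.
                return max(["honest_report", "deceptive_report"],
                           key=lambda a: expected_reward(a, skill))

            print(best_action(0.3))   # honest_report: a weak deceiver can't make lying pay
            print(best_action(0.9))   # deceptive_report: a capable deceiver finds lying pays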

      • Carl Shulman

        >virtually zero interest in failsafes and safeguards 

        From personal experience, I know this is pretty seriously false. Take a look at the discussion of capacity controls, defining clocks, etc, in this paper:
        http://www.nickbostrom.com/papers/oracle.pdf

        > you can’t control ‘ideal mind’ like this as it’ll talk you into giving it the key, but anything practical could be well controllable

        Or look at this presentation, arguing that only a finely tuned decision process would take big risks to take over the world, if the alternative were to safely get a moderate share, and discussing ways to engineer wireheading to be more controllable: http://singularity.org/files/BasicAIDrives.pdf 

      • dmytryl

        Will Sawin: The point is that, e.g., if you are afraid that maximization of utility function f will result in you getting killed, you can change the function to f(worldmodel) + (worldmodel.a ? infinity : 0), where a is an extra reward channel in the world model: binary, set to zero unless you flip a switch, after which it is set to one. The infinity is processed as a sort of NaN, falling through as greater than any finite number but equal to itself. It is a very trivial idea that pops right into your mind if you are thinking ‘failsafe’: the AI gets too smart, it gets you to flip the switch, and you say, phew, glad I added this switch, or the AI might instead have converted me to paperclips! It’s like a fusible link that melts when an electrical device overheats, powering down that device – you probably have plenty of those in your car, your electric kettle, etc.

        Three laws of robotics revisited is what pops into your mind if you think ‘science fiction’.
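
        A minimal sketch of that kill-switch construction in Python; the names (world_model, base_utility, guarded_utility) are made up for illustration and not from any real AI framework:

            import math

            def base_utility(world_model):
                # Whatever the system was originally built to maximize
                # (paperclips here, purely for illustration).
                return world_model.get("paperclips", 0)

            def guarded_utility(world_model):
                # Failsafe as described above: "a" is an extra binary reward
                # channel that only the human operator can set. Once set, the
                # utility becomes +infinity, which compares greater than any
                # finite value the base utility can reach, so "get the operator
                # to flip the switch" dominates every other plan.
                if world_model.get("a", 0) == 1:
                    return math.inf
                return base_utility(world_model)

            # Any world with the switch flipped beats any world without it:
            assert guarded_utility({"a": 1}) > guarded_utility({"a": 0, "paperclips": 10**9})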

      • dmytryl

        Carl Shulman: I’ve given it a brief cursory reading, and all I can say is: philosophers with no technical skills, being irrelevant. Ultimately, the issue is that we do not know how the AI system will be designed, and consequently the only people working on its safety are arrogant ignoramuses from philosophy who repackage the philosophy of mind – which has never produced a single useful insight about the human mind – onto artificial intelligence, where it will never produce a single useful insight about safety.

      • Will Sawin

        We are a diverse race more than we are a selfish race. It seems to me that, at some point in the future, the amount of time and money that has been invested by Bill Gates or Warren Buffett on non-selfish goals will be sufficient to create a new and independent life form. And the required amount of time/money will go down as the technology of useful dependent robots advances. Since it only takes one exceptional individual, with sufficient funds, to hire whatever other individuals are needed to produce a free robot, human diversity implies that eventually it is going to happen.

  • daedalus2u

    This makes a good argument for taxing short-term gains at very high rates, because longer-term gains will be achieved through more ethical and moral means.

    • V V

       Yeah. I’m buying a futures contract on all the thorium in the world for the year 2080…

      • Doug S.

         Good luck finding a counterparty and a government to enforce it.

    • http://www.permut.wordpress.com/ Michael Bishop

      I don’t see how that follows.

      • daedalus2u

         It directly follows from the near/far dichotomy. Things that are far (investments that take a long time to mature) elicit more thought, and more philosophical thought, so they tend to be more ethical than things that are near (short-term investments). If confiscatory taxes compel only long-term investments, they will also compel more ethical investments.

        If you look at the recent financial crises, this seems to be borne out. Short-term investment (a.k.a. speculation or gambling) is done solely for immediate financial gain, and not surprisingly, the short-term mindset of the finance industry promotes the idea that illegal behavior isn’t just tolerated, but that it is necessary to be successful.

        http://www.reuters.com/article/2012/07/10/us-wallstreet-survey-idUSBRE86906G20120710

        “In a survey of 500 senior executives in the United States and the UK, 26 percent of respondents said they had observed or had firsthand knowledge of wrongdoing in the workplace, while 24 percent said they believed financial services professionals may need to engage in unethical or illegal conduct to be successful.”

        “And 30 percent said their compensation plans created pressure to compromise ethical standards or violate the law.”

        To me, this is pretty good evidence that the whole system is broken and needs to be fixed. There was better growth and less financial crime back in the 1950s and 1960s, when marginal tax rates were higher. We should try that approach again.

        There is a Chinese proverb:  “If you want 1 year of prosperity, grow grain. If you want 10 years of prosperity, grow trees. If you want 100 years of prosperity, grow people.”

        What is the current approach?  If you want to be successful, gamble on the short term and cheat.  Cut food stamps for the poor and cut taxes for the wealthy. 

  • arch1

    Katja,

    1) An even more plausible explanation is simple fear/concern over the potential threats (economic, social, political, potentially physical) posed by the increasingly autonomous, numerous and powerful agents that robotics/AI technology is making possible. An understandably common reaction to perceived threats is increased interest in things (in this case, ethical constraints on robot behavior) which hold promise of mitigating those threats.

    Since far future technology is less well understood and thus (other things equal) scarier, it’s reasonable to suspect that this explanation shares with yours the “far” bias.

    2) I echo Mitchell Porter’s comment on the high value of robot ethics.  In terms of both degree of control (one hopes:-) and of potential impact, robotics would appear to be a particularly high-bang-for-the-buck domain for applied ethics.

    • http://profiles.google.com/philoscase R S

      It’s not rational to fear far future technology given that the net effect of, say, the last two centuries of industrial technology has been to the overwhelming benefit of mankind. The balance sheet on technology so far has been laughably lopsided. For every new machine of war there have been a thousand conveniences and life savers.

      • Arch1

        R S, note I didn’t claim that fear/concern is rational, just that it’s a likely explanation.

        That said, I do think it rational to have a level of concern sufficiently high as to render robot ethics urgently interesting and relevant: while the impact of “technology so far” has been overwhelmingly positive, and I remain cautiously optimistic that this will continue, “technology so far” has not yet seen the introduction of (increasingly) superintelligent autonomous agents. The most salient comparison – one still insufficient in some respects – is not to “technology so far” but to the very introduction of humans into the biosphere.

      • Will Sawin

        I have crossed the street hundreds of times, and almost every time it has been a good thing for me – it got me where I wanted to go. Occasionally I was lost and went the wrong way, or where I thought I wanted to go was not actually a good place to be, and so crossing the street was harmful. But this analysis does not say very much about the probability that, next time I cross the street, I will be hit by a car. I know it is not much more than 1/100, but a 1/100 risk of being hit by a car is a pretty big risk. To figure out exactly what the risk is, I have to take a much different perspective.

        Suppose a black swan technology occurs that is unlike any technology yet discovered in some extremely important way. What could happen? It could induce any number of utopias, or it could destroy the world, or it could protect us from the imminent destruction of the world, or a couple of other things. How relatively likely is each of these things, and what can we do to increase the probability of the good ones and decrease the probability of the bad? This is a really subtle question, and very few positions are obviously irrational. All we know from experience is that the rate of black swan technologies is not more than one every couple hundred or so years.

  • http://www.facebook.com/profile.php?id=599840205 Christian Kleineidam

    We are using robots a lot. The US is bombing foreign countries with drones.
    Drones patrol borders. 

    Google already has prototypes of driverless vehicles driving around.

    In a lot of jurisdictions, a civilian is at the moment only allowed to operate drones while babysitting them: they have to be within line of sight.

    Within this decade we have to make new laws about how they can operate without humans babysitting them.

    If we want to give robots a status that allows them to operate without babysitting, then we need a discussion about the ethical standards those robots have to uphold. We also have to discuss who is responsible when a robot does something wrong.

    If a rent-a-car driverless vehicle crashes into a human being, who is legally responsible? The person who rented the car? The company that owns the car? The company that produced the hardware? The company that produced the software?

    We have a bunch of ethical questions that we have to answer *now*, if we want to stop having to babysit robots. 

    http://www.youtube.com/watch?v=qUXZVN4kNfE gives a good overview of those questions.

  • Tim Tyler

    Even geeks like to show how much they care. The hippies have their whales, Gates has his foundation, and the intelligentsia have their intelligent machines.

  • David

    Katja wrote:

    “1. Ethics seems deep and human, so it’s engagingly eerie to combine it with heartless AI
    2. People vastly misjudge how much ethics contributes to the total value society creates”

    Re 1: That eeriness stems from not taking the trouble to be clear on exactly what our ethics are. We all run ethical programs. The first step in programming these into robots is to make them explicit. The second is to eliminate their contradictions.

    Re 2: Ethics is essential to any value a society produces. It’s the operational goal of the vast majority of its people and the reason why large-scale, free societies are even possible.

    There’s a nice little book I just finished on robot ethics that goes into a lot of this: Robot Nation — Surviving the Greatest Socioeconomic Upheaval of All Time by Stan Nielson.

  • roystgnr

    Washing machine ethicists are actually quite in demand; we teach design ethics as part of standard engineering degrees, then enforce it for specific designs via various regulations, “fitness for use” laws, Underwriters Laboratories certifications, etc.

    But the worst-case scenario of washing machine ethics is something like what happened to a professor of mine a decade ago: a bad leak during an extended vacation, and tens of thousands of dollars of home destruction. The worst-case scenario of AI ethics looks more like “intelligent beings with greater capability than us who want to hurt us”, and historically that has often resulted in genocide, even within the tighter constraints on “greater capability” that are enforced by human biology and culture.

  • Robo-Joe

    The question raised by this post has been addressed head-on by David J. Gunkel’s new book “The Machine Question: Critical Perspectives on AI, Robots and Ethics” (MIT 2012). An excerpt is available online at http://machinequestion.org
