The Robot Protocol

Talking with a professor of robotics, I noticed a nice approachable question at the intersection of social science, computer science, and futurism.

Someday robots will mix with humans in public, walking our streets, parks, hospitals, and stores, driving our streets, swimming our waterways, and perhaps flying our skies. Such public robots may vary enormously in their mental and physical capacities, but if they are to mix smoothly with humans in public, then we will probably expect them to maintain a minimal set of common social capacities, such as responding sensibly to “Who are you?” and “Get out of my way.” And the rest of us would have a new, modified set of social norms for dealing with public robots via these capacities.

Together these common robot capacities and matching human social norms would become a “robot protocol.” Once ordinary people and robot makers have adapted to it, this protocol would be a standard persisting across space and time, and relatively hard to change. A standard that diverse robots could also use when interacting with each other in public.

Because it would be a wide and persistent standard, the robot protocol can’t be matched in much detail to the specific local costs of implementing various robot capacities. Instead, it could at best be matched to broad overall trends in such costs. To allow robots to walk among us, we’d try to be forgiving and only expect robots to have capacities that we especially value, and that are relatively cheap to implement in a wide range of contexts.

(Of course this general robot protocol isn’t the only thing that would coordinate robot and human interactions. There’d also be many other more context-dependent protocols.)

One simple option would be to expect each public robot to be “tethered” via fast robust communication to a person on call who can rapidly respond to all queries that the robot can’t handle itself. But it isn’t clear how sufficient this approach will be for many possible queries.

Robots would probably be expected to find and comply with any publicly posted rules for interacting in particular spaces, such as the rules we often post for humans on signs. Perhaps we will simplify such rules for robots. In addition, here are some things that people sometimes say to each other in public where we might perhaps want robots to have analogous capacities:

Who are you? What are you doing here? Why are you following me? Please don’t record me. I’m serving you with this legal warrant. Stop, this is the police! You are not allowed to be here; leave. Non-authorized personnel must evacuate this area immediately. Get out of my way. You are hurting me. Why are you calling attention to me? Can you help me? Can you take our picture? Where is the nearest bathroom? Where is a nearby recharging station? (I may add more here.)
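To make the idea concrete, here is a minimal sketch of what such a protocol interface might look like in code: a small dispatch table mapping standard queries to handlers, with the “tether” to a human operator as the fallback for anything the robot can’t handle itself. Every name and behavior here (the class, the query set, the operator contact) is a hypothetical illustration, not a proposed spec.

```python
# Hypothetical sketch of a minimal "robot protocol": a public robot answers a
# small standard set of queries itself, and escalates everything else to a
# tethered human operator. All names here are illustrative assumptions.

class PublicRobot:
    def __init__(self, robot_id, operator_contact):
        self.robot_id = robot_id
        self.operator_contact = operator_contact
        # Standard queries the protocol expects every public robot to answer.
        self.handlers = {
            "who are you": self.identify,
            "get out of my way": self.yield_path,
            "please don't record me": self.stop_recording,
        }

    def identify(self):
        return f"I am robot {self.robot_id}, operated by {self.operator_contact}."

    def yield_path(self):
        return "Yielding right of way."

    def stop_recording(self):
        return "Recording disabled in your vicinity."

    def respond(self, query):
        # Normalize case and trailing punctuation before lookup.
        key = query.strip().lower().rstrip(".!?")
        handler = self.handlers.get(key)
        if handler:
            return handler()
        # Fallback: the "tether" -- refer the query to a human on call.
        return f"Please contact my operator at {self.operator_contact}."

robot = PublicRobot("R-17", "ops@example.com")
print(robot.respond("Who are you?"))
print(robot.respond("Why are you following me?"))
```

The point of the sketch is only that a short, fixed query set plus a universal fallback is enough to satisfy a forgiving protocol; the hard design work is choosing which queries make the standard list.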

It seems feasible to start now to think about the design of such a robot protocol. Of course in the end a robot protocol might be just a social convention without the force of law, and it may result more from decentralized evolution than centralized design. Even so, we may now know enough about human social preferences and the broad outlines of the costs of robot capacities to start to usefully think about this problem.

  • http://www.jessriedel.com Jess Riedel

    I don’t think the robot protocol has to be simple so much as discoverable and adaptable. For all of their faults and abuses, Western legal systems actually have many (often cleverly designed) features that allow them to interact with unsophisticated users. For instance, judges are instructed to be forgiving of honest procedural errors, advisors are often provided free of charge, ignorance often *is* an excuse (or mitigating circumstance), most laws are designed to align with folk moral intuition even when not maximally efficient, etc. Anyone who doubts that things could be much worse in this respect should try interacting with a legal system that does *not* have to deal with unsophisticated users, e.g., Wikipedia “law” and pharmaceutical regulation. Indeed, there is often a gradient of sophistication, where laws are allowed to be more complex and unforgiving when there are assurances that they are screened off from unsophisticated users; e.g., buying radio spectrum requires more paperwork than starting an LLC, which in turn is harder than getting a birth certificate for a newborn baby.

    We should expect similar layers of complexity with robots, which will have robust simple rules for avoiding physical injury to children and the mentally challenged (who have more difficulty articulating their desires), but progressively more complicated rules for dealing with vehicle drivers, private security, and intelligence agencies.

    • http://overcomingbias.com RobinHanson

      I didn’t say it had to be simple.

      • http://www.jessriedel.com Jess Riedel

        > the robot protocol can’t be matched in much detail to the specific local costs of implementing various robot capacities. Instead, it could at best be matched to broad overall trends in such costs. To allow robots to walk among us, we’d try to be forgiving and only expect robots to have capacities that we especially value, and that are relatively cheap to implement in a wide range of contexts.

        I predict lots of matching to local details and little forgiveness in expecting robots to behave sensibly over many sorts of interactions. But these complications will be incorporated into the sophistication hierarchy rather than being part of a common knowledge that all laymen are expected to know.

  • Dave Lindbergh

    Premature standardization can lock in bad decisions.

    Better to wait on standardization until interoperability becomes a practical problem.

    • http://overcomingbias.com RobinHanson

      Sure, but it can still help to think about problems ahead of time.

  • Chris Hibbert

    It might also be useful to have a standardized signal indicating that the robot follows some social protocol. We normally assume that anyone who looks like a person is fair game for these kinds of questions. I don’t know whether the best rule is something obvious like “don’t give it a face unless it can respond to questions”, or whether some designed insignia on the “shoulder” or “chest” (presuming there’s a physiological region like that) would be better. This seems like a reasonable area for thinking about design.

  • Chris Hibbert

    Here’s a separate thread for suggesting additions to Robin’s list. Here’s my first contribution:

    “How can I get in touch with your owner?”

  • http://www.litmas.me Kazunori Seki

    I’ve been thinking about a similar question for several months, since I talked with my former tutor (a visiting professor) who led this project: https://www.youtube.com/watch?v=ZHMQuo_DsNU
    I failed to explain the DAO to him at the time. I should have said that data on a public blockchain are alive, and can be emotional. The protocol is certainly the intersection.

  • Anders Sandberg

    In a sense robots.txt is a first demonstration of this, although just for webcrawler robots. It just tells them “Robot X is not allowed in these places”.
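    (For reference, a typical robots.txt file looks something like the hypothetical excerpt below — it bars one named crawler from a directory while allowing all others everywhere; the bot name and path are made up for illustration.)

    ```
    User-agent: BadBot
    Disallow: /private/

    User-agent: *
    Disallow:
    ```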

  • Anand Kumar

    The most interesting discussion I’ve seen on the challenges in this space is from Rodney Brooks on self-driving cars: http://rodneybrooks.com/unexpected-consequences-of-self-driving-cars/

    Rodney’s take is that navigating social interactions with other humans is a significant hurdle for autonomous self-driving vehicles, and one of the major open challenges to solve before mass adoption is possible. Challenges include both humans getting angry at “rude” robots that can’t pick up on social context or cues as well as humans taking advantage of “polite” robots such as passive drivers.

  • arch1

    I think that consideration of robots’ rights is an important element of any such protocol design.