Talking with a professor of robotics, I noticed a nice approachable question at the intersection of social science, computer science, and futurism. Someday robots will mix with humans in public, walking our streets, parks, hospitals, and stores, driving our roads, swimming our waterways, and perhaps flying our skies. Such public robots may vary enormously in their mental and physical capacities, but if they are to mix smoothly with humans in public, then we will probably expect them to maintain a minimal set of common social capacities, such as responding sensibly to “Who are you?” and “Get out of my way.” And the rest of us would have a new, modified set of social norms for dealing with public robots via these capacities.
I think that consideration of robots' rights is an important element of any such protocol design.
The most interesting discussion I've seen on the challenges in this space is from Rodney Brooks on self-driving cars: http://rodneybrooks.com/une...
Rodney's take is that navigating social interactions with other humans is a significant hurdle for autonomous self-driving vehicles, and one of the major open challenges to solve before mass adoption is possible. Challenges include both humans getting angry at "rude" robots that can't pick up on social context or cues as well as humans taking advantage of "polite" robots such as passive drivers.
In a sense, robots.txt is a first demonstration of this, although only for webcrawler robots. It just tells them, "Robot X is not allowed in these places."
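For reference, a robots.txt file really is that simple: a plain-text list of per-agent allow/deny rules served from a site's root. The agent name and paths below are made up for illustration:

```
User-agent: ExampleBot
Disallow: /private/
Disallow: /drafts/

User-agent: *
Allow: /
```

Notably, compliance is entirely voluntary on the crawler's side, which already previews the "polite vs. rude robot" problem raised above.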
I've been thinking about a similar question for several months, since I talked with my former tutor (a visiting professor) who led this project: https://www.youtube.com/wat... I failed to explain the DAO to him at the time. I should have said that data on a public blockchain are alive, and can be emotional. The protocol is certainly the intersection.
Here's a separate thread for suggesting additions to Robin's list. Here's my first contribution:
"How can I get in touch with your owner?"
It might also be useful to have a standardized signal indicating that the robot follows some social protocol. We normally assume that anyone who looks like a person is fair game for these kinds of questions. I don't know whether the best rule is something obvious like "don't give it a face unless it can respond to questions", or whether some designed insignia on the "shoulder" or "chest" (presuming there's a physiological region like that) would be better. This seems like a reasonable area for thinking about design.
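One way to picture such a standardized signal, by loose analogy with robots.txt: a machine-readable declaration of which baseline social queries a robot supports, which could back a physical insignia. Everything here is hypothetical — the query names, the `RobotManifest` structure, and the baseline set are invented for the sketch:

```python
from dataclasses import dataclass, field

# Hypothetical baseline queries a "public robot" might be expected to answer,
# drawn from the examples in this thread.
BASELINE_QUERIES = {"who_are_you", "move_aside", "contact_owner"}


@dataclass
class RobotManifest:
    """A robot's declared social capacities -- the 'insignia' in data form."""
    robot_id: str
    owner_contact: str
    supported_queries: set = field(default_factory=set)

    def meets_baseline(self) -> bool:
        # A robot "wears the insignia" only if it handles every baseline query.
        return BASELINE_QUERIES <= self.supported_queries


courier = RobotManifest(
    robot_id="courier-7",
    owner_contact="ops@example.com",
    supported_queries={"who_are_you", "move_aside", "contact_owner", "report_fault"},
)
print(courier.meets_baseline())  # True
```

The design question in the paragraph above then becomes: what physical marking (face, insignia, etc.) should truthfully advertise that a manifest like this exists and is honored?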
Sure, but it can still help to think about problems ahead of time.
Premature standardization can lock in bad decisions.
Better to wait on standardization until interoperability becomes a practical problem.
> the robot protocol can’t be matched in much detail to the specific local costs of implementing various robot capacities. Instead, it could at best be matched to broad overall trends in such costs. To allow robots to walk among us, we’d try to be forgiving and only expect robots to have capacities that we especially value, and that are relatively cheap to implement in a wide range of contexts.
I predict lots of matching to local details and little forgiveness in expecting robots to behave sensibly over many sorts of interactions. But these complications will be incorporated into the sophistication hierarchy rather than being part of a common knowledge that all laymen are expected to know.
I didn't say it had to be simple.
I don't think the Robot protocol has to be simple so much as discoverable and adaptable. For all of their faults and abuses, Western legal systems actually have many (often cleverly designed) features that allow them to interact with unsophisticated users. For instance, judges are instructed to be forgiving of honest procedural errors, advisors are often provided free of charge, ignorance often *is* an excuse (or mitigating circumstance), most laws are designed to align with folk moral intuition even when not maximally efficient, etc. Anyone who doubts that things could be much worse in this respect should try interacting with a legal system that does *not* have to deal with unsophisticated users, e.g., Wikipedia "law" and pharmaceutical regulation. Indeed, there is often a gradient of sophistication, where laws are allowed to be more complex and unforgiving when there are assurances that they are screened off from unsophisticated users, e.g., buying radio spectrum requires more paperwork than starting an LLC, which in turn is harder than getting a birth certificate for a newborn baby.
We should expect similar layers of complexity with robots, which will have robust simple rules for avoiding physical injury to children and the mentally challenged (who have more difficulty articulating their desires), but progressively more complicated rules for dealing with vehicle drivers, private security, and intelligence agencies.