23 Comments

The question raised by this post has been addressed head-on by David J. Gunkel's new book "The Machine Question: Critical Perspectives on AI, Robots, and Ethics" (MIT Press, 2012). An excerpt is available online at http://machinequestion.org

Washing machine ethicists are actually quite in demand; we teach design ethics as part of the standard engineering degrees, then enforce it for specific designs via various regulations, "fitness for use" laws, Underwriters Laboratories certifications, etc.

But the worst-case scenario of washing machine ethics is something like what happened to a professor of mine a decade ago: a bad leak during an extended vacation, and tens of thousands of dollars of home destruction. The worst-case scenario of AI ethics looks more like "intelligent beings with greater capability than us who want to hurt us", and historically that's often resulted in genocide, even within the tighter constraints on "greater capability" that are enforced by human biology and culture.

Carl Shulman: I gave it a brief, cursory reading, and all I can say is: philosophers with no technical skills, being irrelevant. Ultimately, the issue is that we do not know how the AI system will be designed, and consequently the only people working on its safety are arrogant ignoramuses from philosophy who repackage the philosophy of mind - which has never produced a single useful insight about the human mind - onto artificial intelligence, where it will never produce a single useful insight about safety.

Will Sawin: The point is that, e.g., if you are afraid that maximization of a utility function f will result in you getting killed, you can change the function to g(worldmodel) = f(worldmodel) + (worldmodel.a ? infinity : 0), where a is an extra reward channel in the world model: binary, set to false unless you flip a switch, at which point it is set to true. (The "a ? b : c" expression returns b if a is true and c otherwise.)

The infinity would be processed as a sort of NaN: it falls through as greater than any finite number but equal to itself. It is a very trivial idea that pops right into your mind if you are thinking 'failsafe'.
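A minimal sketch of that override in Python - my illustration of the commenter's formula, not anything from the original post; the toy WorldModel and its boolean channel a are named purely for the example, and math.inf plays the role of the special value described above (greater than any finite number, equal to itself):

    import math
    from dataclasses import dataclass

    @dataclass
    class WorldModel:
        a: bool = False          # extra reward channel: False until the operator flips the switch

    def f(world_model):
        return 42.0              # stand-in for the original utility function

    def g(world_model):
        # f plus the override term: once a is True, the total saturates at
        # infinity, so no plan can outscore "the switch has been flipped".
        return f(world_model) + (math.inf if world_model.a else 0.0)

    print(g(WorldModel(a=False)))   # 42.0 - ordinary operation
    print(g(WorldModel(a=True)))    # inf  - the failsafe dominates everything else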

The AI gets too smart, it gets you to flip the switch, and you say, phew, glad I added this switch, or the AI might instead have converted me to paperclips! It's like a fusible link that melts when an electrical device overheats, powering down that device - you probably have plenty of those in your car, your electric kettle, etc.

The Three Laws of Robotics revisited is what pops into your mind if you think 'science fiction' and have zero expertise in safety (or, for that matter, any technical expertise), which precisely describes the FAI crowd. Ethics is the only thing they can 'work' on. They yammer on and on about how the utility functions need to be safe, but none has the slightest trace of actual technical skill, and consequently none can come up with even the most trivial ideas that might improve safety.

>virtually zero interest in failsafes and safeguards 

From personal experience, I know this is pretty seriously false. Take a look at the discussion of capacity controls, defining clocks, etc., in this paper: http://www.nickbostrom.com/...

> you can't control 'ideal mind' like this as it'll talk you into giving it the key, but anything practical could be well controllable

Or look at this presentation, arguing that only a finely tuned decision process would take big risks to take over the world, if the alternative were to safely get a moderate share, and discussing ways to engineer wireheading to be more controllable: http://singularity.org/file... 

I have crossed the street hundreds of times, and almost every time it has been a good thing for me - it got me where I wanted to go. Occasionally I was lost and went the wrong way, or where I thought I wanted to go was not actually a good place to be, and so crossing the street was harmful. But this analysis does not say very much about the probability that, next time I cross the street, I will be hit by a car. I know it is not much more than 1/100, but a 1/100 risk of being hit by a car is a pretty big risk. To figure out exactly what the risk is, I have to take a very different perspective.
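(A quick gloss on where a bound like 1/100 comes from - my arithmetic, not the commenter's: with zero accidents observed in n independent crossings, the "rule of three" puts the approximate 95% upper bound on the per-crossing risk at 3/n.)

    # rule-of-three sketch; n = 300 accident-free crossings is an illustrative guess
    n = 300
    upper_bound = 3 / n    # 0.01, i.e. roughly a 1/100 per-crossing risk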

Suppose a black swan technology occurs that is unlike any technology yet discovered in some extremely important way. What could happen? It could induce any number of utopias, or it could destroy the world, or it could protect us from the imminent destruction of the world, or a couple of other things. How relatively likely is each of these things, and what can we do to increase the probability of the good ones and decrease the probability of the bad? This is a really subtle question, and very few positions are obviously irrational. All we know from experience is that the rate of black swan technologies is not more than one every couple of hundred years or so.

I don't understand what you intend to suggest as potential failsafes and safeguards. Do you want the operator of the AI to control the AI's reward function? Then the AI will optimize for pleasing the operator. Here is an example problem with that: in everyday life we often have the ability to make those we work with happier with us through deception. That ability increases the smarter we are, and thus the more useful an AI is, the worse this gets. You don't need a superpowered AIXI to suffer from this problem.

We are a diverse race more than we are a selfish race. It seems to me that, at some point in the future, the amount of time and money invested by Bill Gates or Warren Buffett on non-selfish goals will be sufficient to create a new and independent life form. And the required amount of time/money will go down as the technology of useful dependent robots advances. Since it only takes one exceptional individual, with sufficient funds, to hire whatever other individuals are needed to produce a free robot, human diversity implies that eventually it is going to happen.

R S, note I didn't claim that fear/concern is rational, just that it's a likely explanation.

That said, I do think it rational to have a level of concern sufficiently high as to render robot ethics urgently interesting and relevant: while the impact of "technology so far" has been overwhelmingly positive, and I remain cautiously optimistic that this will continue, "technology so far" has not yet seen the introduction of (increasingly) superintelligent autonomous agents. The most salient comparison - one still insufficient in some respects - is not to "technology so far" but to the very introduction of humans into the biosphere.

Katja wrote:

"1. Ethics seems deep and human, so it’s engagingly eerie to combine it with heartless AI2. People vastly misjudge how much ethics contributes to the total value society creates"

Re 1: That eeriness stems from not taking the trouble to be clear on exactly what our ethics are. We all run ethical programs. The first step in programming these into robots is to make them explicit. The second is to eliminate their contradictions.

Re 2: Ethics is essential to any value a society produces. It is the operational goal of the vast majority of its people and the reason large-scale, free societies are even possible.

There's a nice little book I just finished on robot ethics that goes into a lot of this: Robot Nation -- Surviving the Greatest Socioeconomic Upheaval of All Time by Stan Nielson.

 Good luck finding a counterparty and a government to enforce it.

Yes. Another case in point: the friendly AI crowd's obsession with 'friendliness' ethics and virtually zero interest in failsafes and safeguards (such as wireheading, non-self-preservation of AIXI, etc.; one can imagine an AI for which perfect one-instant wireheading is a possibility and where you, the operator, hold the keys to the AI's paradise; you can't control an 'ideal mind' like this, as it'll talk you into giving it the key, but anything practical could be well controllable).

It is also the case that ethics is easy and 'safe' (in terms of potential injuries to the ego) to think about, in contrast to well-specified technical arguments, where, when one talks nonsense, it's not a matter of opinion. Furthermore, it is just a lot easier to fantasise about an omnipotent god, especially for those with a Christian background.

Even geeks like to show how much they care. The hippies have their whales, Gates has his foundation, the intelligentsia have their intelligent machines.

We are using robots a lot. The US is bombing foreign countries with drones. Drones patrol borders.

Google already has prototypes of driverless vehicles driving around.

In a lot of jurisdictions, a civilian is at the moment only allowed to operate drones while babysitting them. They have to stay within line of sight.

Within this decade we have to make new laws about how they can operate without humans babysitting them.

If we want to give robots a status that allows them to operate without babysitting, then we need a discussion about the ethical standards those robots have to uphold. We also have to discuss who is responsible when a robot does something wrong.

If a rent-a-car driverless vehicle crashes into a human being, who's legally responsible? The person who rented the car? The company that owns the car? The company that produced the hardware? The company that produced the software?

We have a bunch of ethical questions that we have to answer *now*, if we want to stop having to babysit robots. 

http://www.youtube.com/watc... gives a good overview of those questions.

It directly follows from the near/far dichotomy. Things that are far (investments that take a long time to mature) elicit more thought, and more philosophical thought, so they tend to be more ethical than things that are near (short-term investments). If confiscatory taxes compel only long-term investments, they will also compel more ethical investments.

If you look at the recent financial crises, this seems to be borne out. Short-term investment (a.k.a. speculation or gambling) is done solely for immediate financial gain, and not surprisingly, the short-term mindset of the finance industry promotes the idea that illegal behavior isn't just tolerated but is necessary to be successful.

http://www.reuters.com/arti...

“In a survey of 500 senior executives in the United States and the UK, 26 percent of respondents said they had observed or had firsthand knowledge of wrongdoing in the workplace, while 24 percent said they believed financial services professionals may need to engage in unethical or illegal conduct to be successful.”

“And 30 percent said their compensation plans created pressure to compromise ethical standards or violate the law.”

To me, this is pretty good evidence that the whole system is broken and needs to be fixed. There was better growth and less financial crime back in the 1950s and 1960s, when marginal tax rates were higher. We should try that approach again.

There is a Chinese proverb:  “If you want 1 year of prosperity, grow grain. If you want 10 years of prosperity, grow trees. If you want 100 years of prosperity, grow people.”

What is the current approach?  If you want to be successful, gamble on the short term and cheat.  Cut food stamps for the poor and cut taxes for the wealthy. 

I don't see how that follows.
