The Bad Guy Bias

Shankar Vedantam:

Nations tend to focus far more time, money and attention on tragedies caused by human actions than on the tragedies that cause the greatest amount of human suffering or take the greatest toll in terms of lives. … In recent years, a large number of psychological experiments have found that when confronted by tragedy, people fall back on certain mental rules of thumb, or heuristics, to guide their moral reasoning. When a tragedy occurs, we instantly ask who or what caused it. When we find a human hand behind the tragedy — such as terrorists, in the case of the Mumbai attacks — something clicks in our minds that makes the tragedy seem worse than if it had been caused by an act of nature, disease or even human apathy. …

Tragedies, in other words, cause individuals and nations to behave a little like the detectives who populate television murder mystery shows: We spend nearly all our time on the victims of killers and rapists and very little on the victims of car accidents and smoking-related lung cancer. "We think harms of actions are much worse than harms of omission," said Jonathan Baron, a psychologist at the University of Pennsylvania. "We want to punish those who act and cause harm much more than those who do nothing and cause harm. We have more sympathy for the victims of acts rather than the victims of omission. If you ask how much should victims be compensated, [we feel] victims harmed through actions deserve higher compensation."

This bias should also afflict our future thinking, making us worry more about evil alien intent than unintentional catastrophe. 

  • http://profile.typekey.com/aroneus/ Aron

    Quite possibly a good bias to have.

  • http://yudkowsky.net/ Eliezer Yudkowsky

    Indeed, I’ve found that people repeatedly ask me about AI projects with ill intentions – Islamic terrorists building an AI – rather than trying to grasp the ways that well-intentioned AI projects go wrong by default.

  • luzr

    Eliezer:

    The point of the post is that real AI has intentions of its own, I believe.

    I am much more worried about an artificial nano-replicator killing all humans because of some design bug or mutation, without any ill intentions involved, than about a superintelligent AI using a nano-replicator with ill intentions to kill all humans.

    In fact, I hope that we will develop strong AI before nano-replicators – maybe only true strong AI can prevent this kind of bug.

  • Frelkins

    On consideration, I would like to suggest that this bias arises out of an overlap of three tendencies. Where they clash, the bias occupies the seemingly reasonable ground.

    1 – language issues. We call plagues, earthquakes, etc. “acts of God.” This gives the impression that nothing can be done about them. Thus it is hard, slow work to pass better building codes, enact public health measures, and such. The attribution to God appears to make any planning action we could take pointless, and may even subtly hint that the victims deserved it through “sin.”

    2 – control bias. People have too much confidence in their own control. So we are not so concerned with smoking and auto deaths, because the victim appeared to be in control of the smoking or driving behavior. For a long time, despite the health knowledge we had, we were reluctant to acknowledge that nicotine is actually addictive and to act rationally on that, even though the cost to the public purse was high. Anti-smoking efforts became effective only when the issue became “second-hand smoke” – that is, something another person did to you. Once the harm was removed from your sphere of control, society moved rapidly. We will suffer much to preserve this illusion of control.

    3 – tribal justice. When “outsiders” – serial killers, terrorists – attack us, they deprive us of tribe members. We need tribe members to promote the survival of our lineage, either as genetic relatives or as allies who support our relatives. We need to prove to our allies that their loyalty is rewarded, so we hunt down the outsiders and are angry about what they have cost us. In the past our allies were also more probably related to us, so they shared many of our genes (and memes). If we don’t play detective to find the bad guys, our allies and genetic relatives will not trust us.

  • http://reflectivedisequilibria.blogspot.com/ Carl Shulman

    “Islamic terrorists building an AI”
    Right, because there are so many brilliant Islamic terrorists who split their time between Al Qaeda and advancing cognitive science that there is a non-negligible chance that they will do so. What was the context?

  • michael vassar

    Frelkins: I think the “acts of God” terminology is not in common use among many populations, and in my observation these populations still show bad guy bias, even to the point of explicit beliefs that it’s somehow worse to be killed than to die in other similarly violent ways. 3 raises the question: “Why don’t allies distrust us if we are indifferent to their non-malevolently caused plight?” 2 is definitely valid, but is it enough by itself?

    All, esp. Luzr: In the >H community, concern with accidental damage, e.g. from grey goo, remained greater than concern over malevolent damage, e.g. from nanotech warfare, until careful, fairly independent analysis from multiple trusted sources largely dismissed the former as a problem relative to the latter. Also, while uploads and maybe ems have something that humans would recognize as intent, neither designed AGI nor evolution does, and those seem to be the major dangers perceived here.

  • frelkins

    @Carl

    What was the context?

    When I was in India this summer, my visit coincided with a spate of terrorist attacks in Bangalore, Assam, Gujarat, and other areas. In Bangalore – the tech center – a radical Muslim student group called SIMI claimed to be involved. They include elite engineering students, and much consideration was given to the idea that soon they would move from crude flowerpot bombs at bus shelters to sophisticated computer attacks on government and financial sites.

    That groups like these are funded and supported by state actors outside India cannot be denied. Surely no one doubts the scientific capabilities of jihadists, insofar as they are coming from the best schools in India and abroad. If all it would take to build even a crude, low-level dangerous AI is a lot of money and a cadre of the dedicated, certainly Middle Eastern state actors have the oil wealth and can recruit from well-educated radicals in India, perhaps even the UK and Germany.

    I think it is rational to list it as a catastrophic risk, although I would leave it to folks like Robin to prioritize it among the others.

  • luzr

    “Also, while uploads and maybe ems have something that humans would recognize as intent, neither designed AGI nor evolution do.”

    Maybe it is just my poor English, but do you really mean that AGI does not have intent?

    Do you suggest that intent is only reserved for humans?

  • luzr

    frelkins:

    Ah, sorry, I finally got the point: You mean that most humans would not say that AGI has intent, not that AGI does not have one, correct?

    OTOH, I believe that while this is true for the general population, many in the subgroup that frequents here are ready to consider that AGI has intent… (IMHO).

  • conchis

    At a more mundane level than global catastrophe, there’s apparently some evidence that people experience more pain if they perceive that the pain is intentionally inflicted.

    The study seems to be based on self-reports, so it could be either a partial justification of the bad guy bias in some circumstances, or possibly just a result of it. Hard to tell without more “objective” measures…

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Carl, I just seriously get asked that question all the time, and not just by reporters. “Islamic terrorists” are simply the default bad guys of our time.

  • Douglas Knight

    I am surprised that the “not just reporters” need to name the bad guys.

  • http://zbooks.blogspot.com Zubon

    Eliezer, just so long as the communists don’t build it, nor the next default bad guys who come after the terrorists. Or the Nazis, and whoever the default bad guys were before them.

    Although you must admit, if they do build an unfriendly, superintelligent, recursively improving AI with nano-replicators, then the terrorists will have already won. Ha, beat that, smart guy! (I am imagining Eliezer on a cable news network interview, on the split screen, as the person on the other side of the split makes that point with smug condescension.) (I think it works for some values of “unfriendly” and “won.”)

  • Constant

    Nations tend to focus far more time, money and attention on tragedies caused by human actions than on the tragedies that cause the greatest amount of human suffering or take the greatest toll in terms of lives.

    There are dissimilarities which may justify this choice of focus as correct. The term “nation” might refer to government. While it may be popular nowadays for government to help people in case of natural catastrophe, the “regular job” of government is often considered to be to enforce the law – i.e., to go after bad guys, not to go after earthquakes.

    something clicks in our minds that makes the tragedy seem worse than if it had been caused by an act of nature, disease or even human apathy.

    This is hard to measure. “Seem worse”? The way to measure this is surely by looking at reaction. And a dissimilarity in the cause may justify a dissimilarity in the reaction, even if the harm considered in isolation from the cause is equal.

    We want to punish those who act and cause harm much more than those who do nothing and cause harm.

    First, we should rationally punish those who are morally culpable, not merely those whose actions contributed causally to the harm. After all, it takes two to cause a head-on collision, but often only one of the participants is morally culpable (e.g. the person whose car was in the wrong lane). There is an important distinction to be made between those who are culpable and those who are not. And generally, those whose inaction “causes” a harm (in the sense that the person, by failing to act, fails to prevent it) are not culpable.

    It would be hasty to call existing moral distinctions between those who are and are not culpable “biased”. There are generally good reasons for these distinctions.

  • Constant

    If you ask how much should victims be compensated, [we feel] victims harmed through actions deserve higher compensation.

    Probably they are the only ones who deserve any compensation at all. For a person to deserve compensation, another person must be obligated to give them compensation. This other person is the person morally responsible for the harm. In the case of a natural disaster there is likely no person morally responsible for the harm, therefore no person obligated to give compensation to the victim. Therefore the victim is not entitled to compensation from any particular person. Therefore the victim is not entitled to compensation, period.

  • http://hanson.gmu.edu Robin Hanson

    Yes, the Friendly AI concern is not about ill-intentioned creators, but it is about ill-intentioned creations.

  • http://profile.typekey.com/halfinney/ Hal Finney

    Right, I think the point at the end is that this bias makes us too concerned about unfriendly AI, because it fits into this “evil actor” model. One might also point to the many criticisms raised here over Robin’s “ems” scenario along these lines, that the ems would be evil (or at least selfish and/or uncaring) and do things that would wipe out the human race or worse. Both of these concerns direct us to focus on the motivations of intelligent and powerful beings as a principal threat to our happiness. This framework is much the same as how people today view the threat of terrorism.

    Even more naturalistic threats, like global warming or resource exhaustion, tend to be interpreted in a model based on “bad guy” actions. It’s nobody’s fault, really, that the world may be running out of oil, or capacity to absorb CO2 – it’s just an unfortunate fact of nature. Kind of an odd coincidence that it is happening just as a demographic transition is reducing population growth rates, as well as when technology is almost there for cheap work-arounds to these problems. If the world had had one order of magnitude more capacity for these resources then we’d probably have a much smoother time transitioning. But the issues are often framed such that we look for someone to blame, we see the problem as a result of bad behavior.

  • michael vassar

    Robin: Only in the sense that evolution or economic attractors have ill intentions.
    It’s VERY hard to avoid anthropomorphizing AGI unless you keep those in mind as part of your default sample set of “optimizers”.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    I could make the same point with respect to unFriendly AI; the ones you have to worry about aren’t “evil” in the sense of carrying out deliberately malevolent deeds, but rather, the ones who don’t care about your existence one way or another (and you are made of atoms that they could use for things they do care about).

    Minds are more powerful and have a larger impact than just about anything else; smarter minds have larger impacts. People concerned with large impacts on the future have a natural cause to be concerned about extremely intelligent actors. And for obvious reasons, we shouldn’t be worrying about those whose desires approach our own reflective equilibria, but rather those which are neutral, twisted-good, or (in the case of augmented uploads) evil.

  • http://www.nancybuttons.com Nancy Lebovitz

    I think war gets treated as more like a natural disaster than the result of ill intent. The number of Jews killed in the Holocaust is quite well known. The number of non-Jews killed in the Holocaust is fairly well known. The number of people killed by Hitler’s invasions is generally not mentioned.

  • Jayson Virissimo

    “these populations still show bad guy bias, even to the point of explicit beliefs that it’s somehow worse to be killed than to die in other similarly violent ways” -michael vassar

    Michael, are you saying you shouldn’t be more upset if I kicked you in the shin than if you just bumped your shin on a railing?

  • http://profile.typekey.com/halfinney/ Hal Finney

    That’s a good question, Nancy; I was wondering about the same thing. It is certainly possible that warfare will turn out to be overwhelmingly the biggest disaster facing humanity in the 21st century, and that perhaps time spent aiming to avoid disasters involving super-intelligence might be better spent working to improve institutions to prevent war. Of course, catastrophic future wars, armageddon, post-apocalyptic survival, etc., are among the oldest tropes in speculative fiction. So I’m not sure that the bias here applies, that we underestimate the impact of war.

    Eliezer, minds do have a disproportionately large impact, but at the same time, historically humanity has arguably been more harmed by non-minds than minds. This bias makes us focus too hard on the actions of minds.

    I am trying to think of what future problems we might be ignoring because of this bad-guy bias. I suppose it comes down to the kinds of things Vedantam mentions, natural disasters and accidents. Maybe the real key to happiness and prosperity in the future is encouraging more people to buy insurance. That will not only protect them against certain harms, it will send price signals to avoid risky behavior.

  • http://www.nancybuttons.com Nancy Lebovitz

    Aging is probably the biggest non-badguy damage happening in the civilized world.

  • luzr

    Hal:

    “perhaps time spent aiming to avoid disasters involving super-intelligence might be better spent working to improve institutions to prevent war”

    I would go even further and say that perhaps Strong AGI is the institution to prevent war and other disasters.

    There are so many risks to human civilization, and most of them can be prevented in a posthuman environment…

    Nancy:

    “Aging is probably the biggest non-badguy damage happening in the civilized world.”

    Yep. Another case for moving things ahead.

  • http://www.hopeanon.typepad.com Hopefully Anonymous

    Subsidiary to repugnancy bias and commission bias (bad things committed by an entity with apparent agency are more repugnant).

  • John David Galt

    This bias is easily explained: we expect (in the sense of “demand”) people to treat each other decently. One can’t be sapient without having this duty. Thus, people close to the victim of a killer or rapist quite reasonably believe there’s no excuse for the crime to have occurred — a specific person is responsible and needs to be punished and/or made to pay restitution, so feeling outrage toward him/her is the only appropriate response.

    In contrast, car wrecks (excluding those caused deliberately or by gross dereliction) are sufficiently unforeseeable (and the precautions that might have prevented them are sufficiently non-worthwhile) that there is nobody to blame: such things inevitably happen to a few. The same reasoning applies to lung cancer, if we credit the victim’s own decision that smoking is enjoyable enough to be worth the risk to him/her. (If we don’t, then it is self-inflicted.) In either case there is no reasonable target for outrage, except maybe “God”.
