Kind Right-Handers

Imagine a society like ours, but with a moral norm against ever using a right hand to hurt anyone.  They kill, rape, torture, and so on, but always with their left hand, never with their right.  They are proud to live in a civilized society, and are disgusted by barbaric societies where right-handed harm is common.  Their disgust sometimes makes them war against barbarians, to civilize them.  But even in war they are careful to show their moral superiority by only killing with their left hands.   Are these people as moral as they believe? 

James Miller’s recent post on replacing prison with torture suggested to me this allegory of kind right-handers.  Most of us are apparently very proud of our moral norm against torture, even though we allow ourselves to impose great harms in other ways.  We tend to be disgusted by Muslim torture practices, encouraging us to go to war against such "barbarians."   But I fear such moral norms do more to help us feel superior than to reduce the total amount of harm.

So who wants to start a crusade against right-handed harm?   

  • michael vassar

    The question is all one of whether left-handed harm simply substitutes for the eliminated right-handed harm or not, which seems, in the case of torture, to boil down to the question of whether prison and torture serve as substitutes. Since prison seems to protect me from criminals and torture seems not to, I’d guess not, except possibly in the case of white collar offenses, and of course drug offenses, which many here would probably agree are based on barbaric laws. OTOH, I can imagine things that might also protect me from criminals without prison. I wonder if a small basic income paid in corn, beans, and legal opiates would do the trick and let criminals protect society from themselves. Torture might be a complement to that basket by increasing demand for the opiates. The barbarians don’t do anything like this though, so I think it’s fair to designate them with such epithets. In general it does seem that cultures show strong positive correlations among their “industrial flourishing promoting” institutions and norms, roughly with Victorian England on one end of the scale and North Korea on the other. One can generalize from drug laws, passports, etc., and suggest that the 20th century was typically a relapse towards barbarism, and find a great many historians who would agree.

  • I think the rule against torture is less arbitrary than a rule requiring all harm to be done with the left hand. Saying that punishment stops at the prisoner’s skin permits causing a lot of pain (including up to severe torture and death) but it also eliminates a lot of cruelty and crippling.

    I think the Abu Ghraib pictures showed how strong the taboo against leaving permanent marks is in this culture, and I think it’s a taboo worth maintaining.

  • Michael, I think you’re missing the larger point of Robin’s excellent allegorical post, because I think Robin made an unhelpful choice to hitch it specifically to the torture vs. prison argument.

    I suspect that in a fairly large scope, the “right hand” is yucky forms of harm to others (or arguably more narrowly, to perceived hegemonic social cohort healthy persistence odds, or phschpo), and that the “left hand” is non-yucky forms of harm to others (or arguably phschpo).

    Yucky forms of harm to others (phschpo) can include torture and execution by individuals or the state, sexual molestation (particularly by strangers), and can include “natural” threats like shark attacks, plane crashes, and bridge collapses.

    Non-yucky forms of harm to others are often a bit more abstract, and are often rooted in economic waste and inefficiency.

    I think this is standard yucky/eww bias – a very important topic, and perhaps a more harmful/wasteful bias to our mutual persistence odds than various religious biases. I think it could use its own Chris Hitchens/Richard Dawkins/Sam Harris to draw stronger public/intellectual attention to the harmful effects of this systemic bias.

  • Michael, what proportion of violent crime would you say is caused by extreme poverty?

  • Michael, torture plus collars could protect you like prison does.

    Nancy, why would stopping at the skin reduce total harm more than stopping at the left hand?

  • michael vassar

    Hopefully: I think you are talking about the basic omission/commission distinction.

  • Michael, no, I think the broader and more salient distinction is eww/yuck bias. I think it captures omission/commission (commission is more yucky), but it also captures the difference in penalties for a wide variety of crimes, for example double murder vs. massive fraud.

  • Stuart Armstrong

    So who wants to start a crusade against right-handed harm?

    If the step after that is to launch a crusade against left-handed harm, count me in!

    Similarly, I see nothing wrong with banning torture and accepting violent prisons, if the next step is to work on improving violent prisons. Feelings of moral superiority are fine, if they are a step on the way, not the end of the road.

    On another note, moral norms don’t generally follow economic rules: there is no market to make sure we end up with the least possible harm, at the lowest possible cost (people’s own estimates of what causes them the most long-term harm are pretty poor as well). So it probably makes sense to deal with this issue by blanket bans, periodically updated, rather than seeking the most “efficient” method at a given time. Though I may be wrong – Robin, would some sort of market comparing the degrees of harm in different situations make any sense? Can’t see how that works myself…

  • Stuart Armstrong

    I think this is standard yucky/eww bias – a very important topic

    But where did this “yuck” reaction come from? Throughout history, people have been more than happy to delegate others to torture in their name. Today we have a “yuck” at the thought. It’s not the “yuck” reaction in itself that is the problem – it’s only a problem if that “yuck” does not adapt to conceptions of real harm (I’m now disgusted at massive distortions of the truth – that disgust wasn’t there before). I feel that in a lot of ways, our “yuck” reactions have been getting generally better these last few decades.

    Anyone up for writing an article on the evolution of “yuck”? Would be a fascinating topic…

  • Silas

    Would people be offended by a program that merely *offered* criminals the option to e.g. be tortured for a week instead of imprisoned for a year? Probably yes, but that strikes me as kind of odd. Why would someone accept it as humane, to imprison someone for a year, but *not* accept it as humane to administer a punishment he prefers to that?

    It’s a moral paradox that arises in many contexts. The general form arises when someone holds all of these beliefs:

    1) No one is obligated to do action X to person Y.
    2) It is virtuous to perform X to Y.
    3) It is morally wrong to perform action X’, which Y considers better than “not X”, but worse than X.

  • Henry V

    Why is there a presumption that “reducing total harm” is the goal (the goal of whom? society as a whole? individuals?)?

    And, how would this be quantified when individuals have different preferences over different forms of harm (corporal punishment, imprisonment, poverty, etc.)?

    In my opinion, any morality in the end makes an appeal to a higher moral authority.

  • b

    “Most of us are apparently very proud of our moral norm against torture, even though we allow ourselves to impose great harms in other ways. We tend to be disgusted by Muslim torture practices, encouraging us to go to war against such “barbarians.” But I fear such moral norms do more to help us feel superior than to reduce the total amount of harm. ”

    well, there are some problems here:

    1. i’m not sure that we’ve really gone to war to stop (non-tyrannical) torture practices. for instance, outside of the offices of the weekly standard, no one wants to go to war against saudi arabia.

    2. henry v got it exactly right. harm reduction is not a universally shared goal, especially not when considering punishments. for instance, many people probably subscribe at least implicitly to retributivist or ‘kantian’ notions of punishment. you should be punished in X way, if what you did merits it. (i think kant, who actually said the only justification of punishment is desert, was a big fan of capital if not corporal punishment.)

  • itchy

    Why is there a presumption that “reducing total harm” is the goal

    Uh, perhaps because without this presumption, there isn’t a crime in the first place.

    In my opinion, any morality in the end makes an appeal to a higher moral authority.

    Higher than what? Yes, any morality makes presumptions. And, without any morality, there is no reason to hold anyone responsible for any action, so this discussion would be moot.

    As I commented, rather late in the game, in the previous post, for me, imprisonment is not about punishment. So perhaps this entire thread is just a devil’s advocate game and isn’t for my benefit. But, should the US seriously discuss the option to torture, I’d have many arguments against, not just stemming from my “bias” — unless, as is implied, my bias is for less suffering.

  • itchy

    Most of us are apparently very proud of our moral norm against torture

    And what a bizarre phrase to use. Most of us are “very proud” of this? Really? I’m no more proud of my norm against torture than I am of my reluctance to rape and murder.

    Yes, it’s a daily struggle, but I manage to persevere.

    I picture this conversation:

    “Dude, congratulations.”
    “For what?”
    “You didn’t torture anyone today, right?”
    “Didn’t condone torture?”
    “Sweet, that’s, what, 6 days of clean living now?”

  • rcriii

  • I think that Stuart is on the right track. There are two ways to eliminate the absurdity of a ban on ‘right-handed harm’. One way is to allow right-handed harm, the other is to ban left-handed harm. For me the choice is obvious.

  • Harjit Bhogal

    Surely the society in which no one ever uses their right hand to cause harm would in fact have less harm done in it than the society in which there is no such moral norm; I, for one, would find it quite difficult to beat someone up using only one hand.

    This isn’t just a glib comment; this is precisely the way that moral norms work. If you can’t torture, then this limits the ways that you can do harm and should reduce the total amount of harm (I’m assuming the kind of consequentialism that Robin seems to be assuming). And this is exactly how we want norms of this kind to work. Of course they don’t work perfectly and harm can be done in other ways, but this is no argument against this particular norm.

    We could reject our moral norms and simply try and minimise the amount of harm we do directly, but this seems practically unfeasible; we need to form dispositions to help us make decisions instead of directly considering the consequences of all our actions. It seems that the disgust that we have for harming people in such an extreme way, and the respect we have for other humans even in the extreme circumstances under which torture is normally an option, are things that are very helpful for us in our quest to reduce the amount of harm caused. And as such we should be very proud of such moral norms.

    Of course this is not to say that cultures with conflicting moral norms are obviously immoral and barbarian, and Robin makes this point very well. But I think it is still the case that the moral norms that we have against things like torture are very good things for our society and we should defend them strongly.

  • prison is thought to serve more than one purpose in the overall reduction of crime (a species of the harm that Robin seeks to reduce) and those purposes won’t be served by torture. Similarly, while it would have been nice to torture a few of the murderers in Iraq, it was probably just easier to drop a bomb on them.

    What is an example of non-torture harm that is truly equivalent to torture harm?

  • Paul Gowder

    Come on, Robin. That’s a really bad rhetorical strategy: substitute an arbitrary distinction (left vs. right-handed behavior) for a non-arbitrary one (torture vs. other kinds of punishment) and imply that since we can’t find a good reason to maintain the first, we also ought not to maintain the second.

    Also, once again you assume that the correct normative ethical theory is utilitarianism. Suppose we’re instead concerned about other things, like some notion of encouraging healthy character development? If we see an overriding concern for, say, not producing the sorts of persons who carry out the torture (i.e. people who are either themselves traumatized or who have their senses of empathy just cauterized away), then we might choose to protect that notwithstanding the fact that it fails to maximize some social utility function. Hmm?

  • Paul, I honestly don’t see why the usual distinction isn’t also arbitrary. Why not also be concerned about the sort of people who carry out prison and fines?

    guy in, the claim is that a mixture of torture and non-jail punishments can produce a similar mix of relevant effects to the current mix of jail and non-torture punishments.

  • Robin- Thank you for your response.

    One purpose of imprisonment is rehabilitation. Presumably, a prisoner has time for contemplation, time to access social and religious programs and time to undergo normal maturation that will assist him in returning as a beneficial participant in the community. I believe torture would be a poor substitute (see, contra, “A Clockwork Orange”) since the person would be released into society.

    This is largely a factual digression and its accuracy is beside the point, because the theoretical arguments you and James Miller seem to be making are based on what people are thinking. Are most of us very proud of our moral norm against torture, or do we think it a less effective means of administering justice and reduction of harm? Without effectiveness or useful purpose, torture appears merely sadistic. I think most Americans actually would support torture over imprisonment where they think it becomes useful—such as imminent threats. Similarly, it seems that the widespread support of the death penalty shows that the government delivering physical harm does not violate any moral norm.

  • Douglas Knight

    Robin Hanson,
    the distinctions we make, such as between imprisonment and corporal punishment, may be lousy, but they are not arbitrary.

    A consequentialist (and not just a utilitarian) would say that the distinction between commission and omission is arbitrary. But most people are not consequentialists. Given how the US uses prison violence, it’s hard for me to call it a crime of omission.

  • What Paul Gowder said. You are attempting to abstract the reality of torture into some larger category of “harm”, implying that we can readily compare the different levels of harm caused by a variety of actions and policies. That strikes me as complete nonsense. Torture is torture, not something else, and harm and pain cannot be traded around like so many yard goods.

    Robin said:
    I honestly don’t see why the usual distinction isn’t also arbitrary.

    Life is full of arbitrary distinctions. We draw a line between childhood and adulthood at age 18 or so, despite there being no overwhelming change happening at that age. Drawing a line between harm in the form of a fine or loss of freedom and harm that involves causing physical pain doesn’t seem that hard to me. There will be borderline cases (e.g., does prolonged sensory deprivation that causes irreversible psychological damage count?), but that doesn’t mean we can’t draw the line somewhere. Indeed, we must.

    BTW we (Americans) have no cause for pride, since our government has been employing and promoting torture for decades.

  • Paul Gowder


    I think we are concerned with the character of those who imprison, etc. Consider Zimbardo’s famous prison study, as well as Abu Ghraib, which suggested that the characters of those who participate in imprisoning others also get warped in distressing ways. But it’s interesting — the way they get warped is that they … turn into torturers! Torture is what people inflicting other kinds of punishment do when they go wrong, when they escape the controls of law, morality, and social norm.

    I think that highlights the nonarbitrariness of the distinction between torture and other forms of punishment. Torture is unique in that it is necessarily, by definition, the gratuitous infliction of suffering on another. The person doing the torture can’t appeal to any other reason for the behavior — there’s no “I’m not locking you up to make you feel pain, I’m doing so to keep you off the streets” rationalization. They have to consciously and intentionally hurt someone else for precisely that purpose. It’s qualitatively different from other kinds of punishment.

    It’s interesting, in this context, that the deliberate infliction of suffering on animals is a useful early marker for sociopathy.* What does it mean that the willingness to torture an animal — not to imprison one (which everyone who has ever owned a caged bird has done) — is correlated with having one’s empathetic screws loose? Perhaps that torture is worse — is harder for humans with a working sense of empathy to accept, makes it harder to maintain a working sense of empathy — than imprisonment etc.?

    * See the DSM, and also Clifton P. Flynn. 2000. “Why Family Professionals Can No Longer Ignore Violence toward Animals” _Family Relations_ 49:87-95.

  • Curt Welch

    Henry V wrote:

    Why is there a presumption that “reducing total harm” is the goal (the goal
    of whom? society as a whole? individuals?)?

    And, how would this be quantified when individuals have different preferences
    over different forms of harm (corporal punishment, imprisonment, poverty,

    In my opinion, any morality in the end makes an appeal to a higher
    moral authority.

    I think these are excellent questions that must be answered before we can come to any real agreement on moral issues. I think the reason we have such problems coming to an agreement on these issues is that the operation of the human brain is the source of all this confusion. And since we don’t have agreement in our society on what the brain is doing, we can’t agree on a workable foundation to answer these most important of questions.

    I will however give you what I believe to be the answer to what the brain is doing, and how it leads to answers to these sorts of questions. I’m interested in these topics because of my interest in creating AI. I can’t create AI until I can answer the question of what the brain is doing. My current best guess as to what the brain does, and how it works, however leads to many very interesting potential answers to social questions such as morality issues.

    I believe that the part of the brain responsible for the production of all our voluntary behavior is just a reinforcement learning machine. As such, the goal of such a machine is simply to try and maximise all future rewards, as defined by a genetically created definition of good and bad. Good for us are all those low level things we are genetically predisposed to be attracted to (pleasure), and bad are the things we are wired to avoid (the stuff we call pain). As such the purpose of the brain is not actually survival. Its direct purpose is simply the avoidance of pain, and the production of pleasure. Indirectly this produces an increased odds of survival in us only because of what evolution has hard-wired into us as prime measures of pain and pleasure.
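    The “maximise all future rewards” picture Curt describes corresponds to the standard reinforcement-learning objective of maximizing expected discounted return. A minimal sketch of that idea (the actions, reward streams, and discount factor below are purely illustrative assumptions, not anything from the comment):

```python
def discounted_return(rewards, gamma=0.9):
    """Sum of a reward stream, with future rewards weighted less
    the further away they are (discounted by gamma per step)."""
    return sum(r * gamma**t for t, r in enumerate(rewards))

def choose_action(action_outcomes, gamma=0.9):
    """Pick the action whose predicted reward stream has the highest
    discounted return -- i.e., 'maximise all future rewards'."""
    return max(action_outcomes,
               key=lambda a: discounted_return(action_outcomes[a], gamma))

# Hypothetical choice: a small immediate pleasure vs. a larger delayed one.
outcomes = {
    "immediate": [1.0, 0.0, 0.0],  # reward now, nothing later
    "delayed":   [0.0, 0.0, 2.0],  # nothing now, bigger reward later
}
print(choose_action(outcomes))  # -> "delayed" (2.0 * 0.9**2 = 1.62 > 1.0)
```

    With a lower discount factor (more impatience), the same machine flips to the immediate reward, which is one way to model the short-term/long-term trade-offs Curt discusses below.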

    This distinction is important because it changes our understanding of our purpose in life. Our genes want us to help them survive (using Dawkins’ view), but the purpose of our brain is not the purpose of the entire body. And what we normally call “our purpose” is in fact the purpose of the brain, because my foot is not what is writing this message to you, it’s my brain. I am a brain talking to you, and my purpose is simply to maximise my own long term pleasure and minimize my long term pain.

    This makes it easy to understand why we choose to use birth control. It’s because our purpose is not to reproduce. It’s to produce long term pleasure. Birth control is one way to increase our long term pleasure. We get all the pleasure of having sex, without the pain of producing a child when we don’t want to produce a child. Our selfish genes might not be “happy” with this, but that’s their problem, not ours. They are the ones that wired us to like sex, and wired us to be a pleasure seeking machine. They did it because for the most part, it greatly increases their odds of survival. But our purpose is not survival, it’s pleasure.

    In the long term, if our use of birth control reduces our odds of survival, evolution and natural selection will re-design humans in some way to make future humans do a better job of reproducing. But that is not our problem. Our problem, as given to us by the way evolution designed us, is to just do whatever we can, to be happy, until the day we die.

    So with that foundation, let’s go back to Henry’s question and see what type of answers we can produce.

    Why is there a presumption that “reducing total harm” is the goal (the goal
    of whom? society as a whole? individuals?)?

    From my perspective, we are brains built for the purpose of minimizing total pain – which, because of how we are wired by evolution, is a close match to reducing total harm. That is, most things that harm us cause us pain, and we are machines built to avoid pain. But we are built, for the most part, only to reduce harm to ourselves. However, it’s easy to see that for anything in our environment that acts as a tool for helping us reduce our own harm, we should also do what we can to reduce harm to it. I don’t want my car to be harmed, because my car helps me prevent future pain. And for the same reason, I don’t want other people to be harmed, because for the most part, other people help me reduce harm to myself as well. So it’s easy to generalize from our prime goal of reducing personal pain to a general rule of reducing total pain to all humans.

    And, how would this be quantified when individuals have different preferences
    over different forms of harm (corporal punishment, imprisonment, poverty,

    That’s very hard to do. The brain, based on how it actually operates, has to quantify everything. If we could accurately scan a person’s brain, we could measure actual levels of pain associated with different experiences and use that as a starting point for making social decisions. But as you say, everyone in the society will have a different level of pain for different events. And we aren’t only talking about simple physical pain of someone being tortured. We are talking about the indirect forms of pain that we all feel, if we simply know that someone is being tortured. So when we harm someone, such as by torture, we are not only creating that pain in them, but we are also creating pain in everyone in the society who feels pain simply at knowing they have allowed someone to be tortured.

    When we outlaw torture, it’s not really an issue of how much pain the person being tortured is feeling. It’s our own selfish need to not feel bad about letting it happen to them. We outlaw torture more because of the fact it causes us pain, than because of the pain felt by the individual. Part of that pain of course is the fear that we might find ourselves on the receiving end of that torture some day. The harm caused to one person being tortured is not nearly as bad as the total pain felt by 300 million people knowing they have allowed the guy to be tortured.

    In my opinion, any morality in the end makes an appeal to a higher
    moral authority.

    Yes, but I think that higher morality stops at the brain. The bottom line for me is that I only care about what causes me pain and pleasure, and that’s defined by the physical operation of my brain. But it so happens that seeing other people in pain, especially the people that mean a lot to me, causes me a lot of pain. So, to reduce my pain, I need to reduce their pain, and this need extends out far and wide to not only humans, but to some extent animals, and plants, and bugs, and the environment. The people and things that most directly affect my future I care the most about, but I care to some degree about just about everything on the planet, because I understand that things that happen to even the rocks on the planet might come back to cause me pain one day.

    The foundation of my morality (aka what I care about and what I sense as good and bad) was built into me by evolution. I don’t need to appeal to a higher morality than my own brain because my own brain and how it’s wired is the source, and the root definition of what is good and what is bad in the universe to me. There is no higher authority than that for me to appeal to when I look for what is right, and what is wrong.

    So, you see, by understanding what the brain is doing, I believe we can answer these questions, which otherwise have no foundation of understanding. And though not everyone agrees with my foundation, I think you can see that when we figure out what the brain is doing, we will have a foundation for what morality is, and why humans see some things as bad and some as good. If we understand that foundation, we should be able to make better social decisions.

    And a key one I see is this idea of replacing the evolution-inspired goal of survival with the equally scientific, evolution-inspired goal of maximizing happiness. Evolution built humans as survival machines, but it built brains to be pleasure-maximising machines. And as a collection of brains talking to each other, trying to figure out what morals are, we should understand that our prime purpose is maximising happiness, not survival. As such, we need to pay more attention to the total happiness of all living humans, and worry less about our struggle for survival.

    Dying isn’t bad as long as it doesn’t come with pain. If we could learn to tap into our pleasure centers and stimulate ourselves with pure pleasure, that would be an ideal way to die. It would be like going to heaven – a place of pure pleasure and no pain. When we have to die, that would be the way to go. And if we as a society decide we need to kill people, that would be by far the most humane way to do it. We kill them by sending them to heaven – by removing all pain from their life – until the point their body stops working and they die. That’s just one example of how a better understanding of what the brain is doing might create some big changes in how we view things like capital punishment. I think that simply understanding that humans are reinforcement learning machines is enough to explain the true foundation of all our morals, and a good enough foundation to allow us to make better social decisions.

    But before this can be useful, we need more people to understand what this means, and so far I’ve had no end of trouble finding people who agree with the idea that humans are reinforcement learning machines. They are too biased in their belief that humans are something more complex than this to accept this sort of answer – or even give it serious consideration.

  • Henry V

    Henry V: “Why is there a presumption that “reducing total harm” is the goal?”

    itchy: “Uh, perhaps because without this presumption, there isn’t a crime in the first place.”

    I think you’ve implicitly equated “reduction of total harm is not the goal” and “harm is not a bad thing.” There may be other goals that (in some cases) conflict with reduction of total harm. To claim that reduction of total harm is the goal, one must explain why this normative value exists. From what moral authority did it generate?

    Henry V: In my opinion, any morality in the end makes an appeal to a higher moral authority.

    itchy: Higher than what? Yes, any morality makes presumptions. And, without any morality, there is no reason to hold anyone responsible for any action, so this discussion would be moot.

    Higher than the one making the claim, I imagine. I’m not saying I’m opposed to this line of reasoning. But, I am saying that it should be explicit rather than implicit in anyone’s argument, including Robin’s. I’m curious as to what Robin’s source of morality is. Are certain things right and certain things wrong, or not?
