25 Comments

Henry V: "Why is there a presumption that "reducing total harm" is the goal?"

itchy: "Uh, perhaps because without this presumption, there isn't a crime in the first place."

I think you've implicitly equated "reduction of total harm is not the goal" with "harm is not a bad thing." There may be other goals that (in some cases) conflict with reduction of total harm. To suggest that reduction of total harm is the goal, one must explain why this normative value exists. From what moral authority does it derive?

Henry V: In my opinion, any morality in the end makes an appeal to a higher moral authority.

itchy: Higher than what? Yes, any morality makes presumptions. And, without any morality, there is no reason to hold anyone responsible for any action, so this discussion would be moot.

Higher than the one making the claim, I imagine. I'm not saying I'm opposed to this line of reasoning. But, I am saying that it should be explicit rather than implicit in anyone's argument, including Robin's. I'm curious as to what Robin's source of morality is. Are certain things right and certain things wrong, or not?


Henry V wrote:

Why is there a presumption that "reducing total harm" is the goal (the goal of whom? society as a whole? individuals?)?

And, how would this be quantified when individuals have different preferences over different forms of harm (corporal punishment, imprisonment, poverty, etc.)?

In my opinion, any morality in the end makes an appeal to a higher moral authority.

I think these are excellent questions that must be answered before we can come to any real agreement on moral issues. I think the reason we have such a problem coming to agreement on these issues is that the operation of the human brain is the source of all this confusion. And since we don't have agreement in our society on what the brain is doing, we can't agree on a workable foundation for answering these most important of questions.

I will, however, give you what I believe to be the answer to what the brain is doing, and how it leads to answers to these sorts of questions. I'm interested in these topics because of my interest in creating AI. I can't create AI until I can answer the question of what the brain is doing. My current best guess as to what the brain does, and how it works, leads to many very interesting potential answers to social questions such as morality.

I believe that the part of the brain responsible for the production of all our voluntary behavior is just a reinforcement learning machine. As such, the goal of such a machine is simply to try to maximise all future rewards, as defined by a genetically created definition of good and bad. Good for us are all those low-level things we are genetically predisposed to be attracted to (pleasure), and bad are the things we are wired to avoid (the stuff we call pain). As such, the purpose of the brain is not actually survival. Its direct purpose is simply the avoidance of pain and the production of pleasure. Indirectly this increases our odds of survival, but only because of what evolution has hard-wired into us as prime measures of pain and pleasure.
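The "reinforcement learning machine" picture above can be made concrete with a tiny sketch. This is my own toy illustration, not anything from the comment: a hypothetical two-action world in which one action is hard-wired as "pain" (negative reward) and the other as "pleasure" (positive reward), and a simple value-learning agent ends up preferring the pleasurable action purely by chasing reward.

```python
import random

# Toy sketch of the "pleasure-seeking machine" described above.
# The world, the rewards, and all parameters are invented for illustration:
# action 0 is hard-wired as "pain" (-1), action 1 as "pleasure" (+1).
random.seed(0)
REWARDS = {0: -1.0, 1: +1.0}   # the innate, genetically fixed good/bad signal

def train(episodes=500, alpha=0.1, epsilon=0.1):
    q = {0: 0.0, 1: 0.0}       # the agent's learned estimate of each action's value
    for _ in range(episodes):
        if random.random() < epsilon:
            action = random.choice([0, 1])      # occasional exploration
        else:
            action = max(q, key=q.get)          # otherwise pick the best-looking action
        reward = REWARDS[action]                # "feel" the innate signal
        q[action] += alpha * (reward - q[action])   # move the estimate toward the reward
    return q

q = train()
best = max(q, key=q.get)   # the learned preference: the "pleasure" action
```

The agent is never told what "pleasure" means; it only follows the built-in reward signal, which is the sense in which the comment calls the brain a reward-maximising machine rather than a survival machine.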

This distinction is important because it changes our understanding of our purpose in life. Our genes want us to help them survive (using Dawkins' view), but the purpose of our brain is not the purpose of the entire body. And what we normally call "our purpose" is in fact the purpose of the brain, because my foot is not what is writing this message to you; it's my brain. I am a brain talking to you, and my purpose is simply to maximise my own long-term pleasure and minimize my long-term pain.

This makes it easy to understand why we choose to use birth control. It's because our purpose is not to reproduce. It's to produce long term pleasure. Birth control is one way to increase our long term pleasure. We get all the pleasure of having sex, without the pain of producing a child when we don't want to produce a child. Our selfish genes might not be "happy" with this, but that's their problem, not ours. They are the ones that wired us to like sex, and wired us to be a pleasure seeking machine. They did it because for the most part, it greatly increases their odds of survival. But our purpose is not survival, it's pleasure.

In the long term, if our use of birth control reduces our odds of survival, evolution and natural selection will re-design humans in some way to make future humans do a better job of reproducing. But that is not our problem. Our problem, as given to us by the way evolution designed us, is to just do whatever we can, to be happy, until the day we die.

So with that foundation, let's go back to Henry's question and see what type of answers we can produce.

Why is there a presumption that "reducing total harm" is the goal (the goal of whom? society as a whole? individuals?)?

From my perspective, we are brains built for the purpose of minimizing total pain - which, because of how we are wired by evolution, is a close match to reducing total harm. That is, most things that harm us cause us pain, and we are machines built to avoid pain. But we are built, for the most part, only to reduce harm to ourselves. However, it's easy to see that, for anything in our environment that acts as a tool for helping us reduce our own harm, we should also do what we can to reduce harm to it. I don't want my car to be harmed, because my car helps me prevent future pain. And for the same reason, I don't want other people to be harmed, because for the most part, other people help me reduce harm to myself as well. So it's easy to generalize from our prime goal of reducing personal pain to a general rule of reducing total pain to all humans.

And, how would this be quantified when individuals have different preferences over different forms of harm (corporal punishment, imprisonment, poverty, etc.)?

That's very hard to do. The brain, based on how it actually operates, has to quantify everything. If we could accurately scan a person's brain, we could measure actual levels of pain associated with different experiences and use that as a starting point for making social decisions. But as you say, everyone in the society will have a different level of pain for different events. And we aren't only talking about simple physical pain of someone being tortured. We are talking about the indirect forms of pain that we all feel, if we simply know that someone is being tortured. So when we harm someone, such as by torture, we are not only creating that pain in them, but we are also creating pain in everyone in the society who feels pain simply at knowing they have allowed someone to be tortured.

When we outlaw torture, it's not really an issue of how much pain the person being tortured is feeling. It's our own selfish need to not feel bad about letting it happen to them. We outlaw torture more because of the fact it causes us pain, than because of the pain felt by the individual. Part of that pain of course is the fear that we might find ourselves on the receiving end of that torture some day. The harm caused to one person being tortured is not nearly as bad as the total pain felt by 300 million people knowing they have allowed the guy to be tortured.
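The aggregate-pain arithmetic being gestured at here can be written out as a toy calculation. Every number below is invented purely for illustration (the comment gives none); the point is only that a tiny empathic cost per bystander, multiplied by a large population, can dominate the direct harm.

```python
# Toy back-of-the-envelope model of the comment's claim; all numbers invented.
direct_pain = 1_000.0        # hypothetical pain units felt by the one person tortured
empathic_pain_each = 0.001   # tiny pain per citizen who merely knows it happened
population = 300_000_000     # the "300 million people" in the comment

total_empathic = empathic_pain_each * population   # aggregate bystander pain
dominates = total_empathic > direct_pain
```

On these made-up numbers the bystander total (about 300,000 units) swamps the direct harm, which is the shape of the argument; whether pain can actually be summed across people this way is, of course, exactly what other commenters dispute.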

In my opinion, any morality in the end makes an appeal to a higher moral authority.

Yes, but I think that higher morality stops at the brain. The bottom line for me is that I only care about what causes me pain and pleasure, and that's defined by the physical operation of my brain. But it so happens that seeing other people in pain, especially the people who mean a lot to me, causes me a lot of pain. So, to reduce my pain, I need to reduce their pain, and this need extends out far and wide, not only to humans, but to some extent to animals, and plants, and bugs, and the environment. The people and things that most directly affect my future I care the most about, but I care to some degree about just about everything on the planet, because I understand that things that happen to even the rocks on the planet might come back to cause me pain one day.

The foundation of my morality (aka what I care about and what I sense as good and bad) was built into me by evolution. I don't need to appeal to a higher morality than my own brain because my own brain and how it's wired is the source, and the root definition of what is good and what is bad in the universe to me. There is no higher authority than that for me to appeal to when I look for what is right, and what is wrong.

So, you see, by understanding what the brain is doing, I believe we can answer these questions, which otherwise have no foundation of understanding. And though not everyone agrees with my foundation, I think you can see that when we figure out what the brain is doing, we will have a foundation for what morality is, and why humans see some things as bad and some as good. If we understand that foundation, we should be able to make better social decisions.

A key one I see is this idea of replacing the scientific, evolution-inspired goal of survival with the better scientific, evolution-inspired goal of maximising happiness. Evolution built humans as survival machines, but it built brains to be pleasure-maximising machines. And as a collection of brains talking to each other, trying to figure out what morals are, we should understand that our prime purpose is maximising happiness, not survival. As such, we need to pay more attention to the total happiness of all living humans, and worry less about our struggle for survival.

Dying isn't bad as long as it doesn't come with pain. If we could learn to tap into our pleasure centers and stimulate ourselves with pure pleasure, that would be an ideal way to die. It would be like going to heaven - a place of pure pleasure and no pain. When we have to die, that would be the way to go. And if we as a society decide we need to kill people, that would be by far the most humane way to do it: we kill them by sending them to heaven - by removing all pain from their life - until the point their body stops working and they die. That's just one example of how a better understanding of what the brain is doing might create some big changes in how we view things like capital punishment.

I think that simply understanding that humans are reinforcement learning machines is enough to explain what the true foundation of all our morals is, and a good enough foundation to allow us to make better social decisions. But before this can be useful, we need more people to understand what this means, and so far I have had no end of problems finding people who agree with the idea that humans are reinforcement learning machines. They are too biased in their belief that humans are something more complex than this to be able to accept this sort of answer - or even give it serious consideration.


Robin,

I think we are concerned with the character of those who imprison, etc. Consider Zimbardo's famous prison study, as well as Abu Ghraib, which suggested that the characters of those who participate in imprisoning others also get warped in distressing ways. But it's interesting -- the way they get warped is that they ... turn into torturers! Torture is what people inflicting other kinds of punishment do when they go wrong, when they escape the controls of law, morality, and social norm.

I think that highlights the nonarbitrariness of the distinction between torture and other forms of punishment. Torture is unique in that it is necessarily, by definition, the gratuitous infliction of suffering on another. The person doing the torture can't appeal to any other reason for the behavior -- there's no "I'm not locking you up to make you feel pain, I'm doing so to keep you off the streets" rationalization. They have to consciously and intentionally hurt someone else for precisely that purpose. It's qualitatively different from other kinds of punishment.

It's interesting, in this context, that the deliberate infliction of suffering on animals is a useful early marker for sociopathy.* What does it mean that the willingness to torture an animal -- not to imprison one (which everyone who has ever owned a caged bird has done) -- is correlated with having one's empathetic screws loose? Perhaps that torture is worse -- is harder for humans with a working sense of empathy to accept, makes it harder to maintain a working sense of empathy -- than imprisonment etc.?

* See the DSM, and also Clifton P. Flynn. 2000. "Why Family Professionals Can No Longer Ignore Violence toward Animals" _Family Relations_ 49:87-95.


What Paul Gowder said. You are attempting to abstract the reality of torture into some larger category of "harm", implying that we can readily compare the different levels of harm caused by a variety of actions and policies. That strikes me as complete nonsense. Torture is torture, not something else, and harm and pain cannot be traded around like so many yard goods.

Robin said:I honestly don't see why the usual distinction isn't also arbitrary.

Life is full of arbitrary distinctions. We draw a line between childhood and adulthood at age 18 or so, despite there being no overwhelming change happening at that age. Drawing a line between harm in the form of a fine or loss of freedom and harm that involves causing physical pain doesn't seem that hard to me. There will be borderline cases (e.g., does prolonged sensory deprivation that causes irreversible psychological damage count?), but that doesn't mean we can't draw the line somewhere. Indeed, we must.

BTW we (Americans) have no cause for pride, since our government has been employing and promoting torture for decades.


Robin Hanson, the distinctions we make, such as between imprisonment and corporal punishment, may be lousy, but they are not arbitrary.

A consequentialist (and not just a utilitarian) would say that the distinction between commission and omission is arbitrary. But most people are not consequentialists. Given how the US uses prison violence, it's hard for me to call it a crime of omission.


Robin- Thank you for your response.

One purpose of imprisonment is rehabilitation. Presumably, a prisoner has time for contemplation, time to access social and religious programs and time to undergo normal maturation that will assist him in returning as a beneficial participant in the community. I believe torture would be a poor substitute (see, contra, "A Clockwork Orange") since the person would be released into society.

This is largely a factual digression, and its accuracy is beside the point, because the theoretical arguments you and James Miller seem to be making are based on what people are thinking. Are most of us very proud of our moral norm against torture, or do we think it a less effective means of administering justice and reducing harm? Without effectiveness or useful purpose, torture appears merely sadistic. I think most Americans actually would support torture over imprisonment where they think it becomes useful, such as imminent threats. Similarly, it seems that the widespread support of the death penalty shows that the government delivering physical harm does not violate any moral norm.


Paul, I honestly don't see why the usual distinction isn't also arbitrary. Why not also be concerned about the sort of people who carry out prison and fines?

guy in, the claim is that a mixture of torture and non-jail punishments can produce a similar mix of relevant effects to the current mix of jail and non-torture punishments.


Come on, Robin. That's a really bad rhetorical strategy: substitute an arbitrary distinction (left vs. right-handed behavior) for a non-arbitrary one (torture vs. other kinds of punishment) and imply that since we can't find a good reason to maintain the first, we also ought not to maintain the second.

Also, once again you assume that the correct normative ethical theory is utilitarianism. Suppose we're instead concerned about other things, like some notion of encouraging healthy character development? If we see an overriding concern for, say, not producing the sorts of persons who carry out the torture (i.e. people who are either themselves traumatized or who have their senses of empathy just cauterized away), then we might choose to protect that notwithstanding the fact that it fails to maximize some social utility function. Hmm?


Prison is thought to serve more than one purpose in the overall reduction of crime (a species of the harm that Robin seeks to reduce), and those purposes won't be served by torture. Similarly, while it would have been nice to torture a few of the murderers in Iraq, it was probably just easier to drop a bomb on them.

What is an example of non-torture harm that is truly equivalent to torture harm?


Surely the society in which no one ever uses their right hand to cause harm would in fact have less harm done in it than the society in which there is no such moral norm. I, for one, would find it quite difficult to beat someone up using only one hand.

This isn't just a glib comment; this is precisely the way that moral norms work. If you can't torture, then this limits the ways that you can do harm and should reduce the total amount of harm (I'm assuming the kind of consequentialism that Robin seems to be assuming). And this is exactly how we want norms of this kind to work. Of course they don't work perfectly, and harm can be done in other ways, but this is no argument against this particular norm.

We could reject our moral norms and simply try to minimise the amount of harm we do directly, but this seems practically unfeasible; we need to form dispositions to help us make decisions instead of directly considering the consequences of all our actions. It seems that the disgust we have for harming people in such an extreme way, and the respect we have for other humans, even in the extreme circumstances under which torture is normally an option, are things that are very helpful for us in our quest to reduce the amount of harm caused. And as such we should be very proud of such moral norms.

Of course this is not to say that cultures with conflicting moral norms are obviously immoral and barbarian, and Robin makes this point very well. But I think it is still the case that the moral norms that we have against things like torture are very good things for our society and we should defend them strongly.


I think that Stuart is on the right track. There are two ways to eliminate the absurdity of a ban on 'right-handed harm'. One way is to allow right-handed harm, the other is to ban left handed harm. For me the choice is obvious.


Most of us are apparently very proud of our moral norm against torture

And what a bizarre phrase to use. Most of us are "very proud" of this? Really? I'm no more proud of my norm against torture than I am of my reluctance to rape and murder.

Yes, it's a daily struggle, but I manage to persevere.

I picture this conversation:

"Dude, congratulations."
"For what?"
"You didn't torture anyone today, right?"
"No."
"Didn't condone torture?"
"No."
"Sweet, that's, what, 6 days of clean living now?"


Why is there a presumption that "reducing total harm" is the goal

Uh, perhaps because without this presumption, there isn't a crime in the first place.

In my opinion, any morality in the end makes an appeal to a higher moral authority.

Higher than what? Yes, any morality makes presumptions. And, without any morality, there is no reason to hold anyone responsible for any action, so this discussion would be moot.

As I commented, rather late in the game, in the previous post, for me, imprisonment is not about punishment. So perhaps this entire thread is just a devil's advocate game and isn't for my benefit. But, should the US seriously discuss the option to torture, I'd have many arguments against, not just stemming from my "bias" -- unless, as is implied, my bias is for less suffering.


"Most of us are apparently very proud of our moral norm against torture, even though we allow ourselves to impose great harms in other ways. We tend to be disgusted by Muslim torture practices, encouraging us to go to war against such "barbarians." But I fear such moral norms do more to help us feel superior than to reduce the total amount of harm. "

well, there are some problems here:

1. I'm not sure that we've really gone to war to stop (non-tyrannical) torture practices. For instance, outside of the offices of the Weekly Standard, no one wants to go to war against Saudi Arabia.

2. Henry V got it exactly right. Harm reduction is not a universally shared goal, especially not when considering punishments. For instance, many people probably subscribe at least implicitly to retributivist or 'Kantian' notions of punishment: you should be punished in X way if what you did merits it. (I think Kant, who actually said the only justification of punishment is desert, was a big fan of capital if not corporal punishment.)


Why is there a presumption that "reducing total harm" is the goal (the goal of whom? society as a whole? individuals?)?

And, how would this be quantified when individuals have different preferences over different forms of harm (corporal punishment, imprisonment, poverty, etc.)?

In my opinion, any morality in the end makes an appeal to a higher moral authority.


Would people be offended by a program that merely *offered* criminals the option to e.g. be tortured for a week instead of imprisoned for a year? Probably yes, but that strikes me as kind of odd. Why would someone accept it as humane, to imprison someone for a year, but *not* accept it as humane to administer a punishment he prefers to that?

It's a moral paradox that arises in many contexts. The general form arises when someone holds all of these beliefs:

1) No one is obligated to do action X to person Y.
2) It is virtuous to perform X to Y.
3) It is morally wrong to perform action X', which Y considers better than "not X", but worse than X.
