Who Told You Moral Questions Would be Easy?
In addition to (alleged) scope insensitivity and "motivated continuation," I would like to suggest that the incredibly active discussion on the torture vs. specks post is also driven in part by a bias toward, well, closure: a bias toward determinate answers, toward decision procedures that are supposed to yield an answer in every case, and that can be implemented by humans in the world in which we live, given the biological and social pressures that we face.
That’s the wonderful thing about the kinds of utilitarian intuitions that tell us, deep in our brains, that we can aggregate a lot of pain and pleasure of different kinds among different people and come up with some kind of scalar representing the net sum of "utility," to be compared with some other scalar for some other pattern of events in some possible world; the comparison is supposed to determine which world is morally better, and to which world our efforts should be directed. Those intuitions always generate a rationalizable answer.
If we demand that our moral questions have answers of that type, comments like Eliezer’s start to look very appealing. Eliezer says that it’s irrational to "impose and rationalize comfortable moral absolutes in defiance of expected utility." But if that’s so, Eliezer owes us an argument for why moral judgments make sense in terms of expected utility. Or why they make sense in terms of any decision theoretic calculation at all. Or why they have to make sense in terms of any overall algorithmic procedure of any kind. To simply assume that decision theory applies to moral questions, that there’s something — utility, goodness, moral worth, whatever — to maximize is to beg the question right from the start. And that’s bias if anything is.
One can easily argue that making decision theoretic arguments about moral questions is a massive category error. On that view, it’s not irrational to blink away the dust speck even if you believe that no quantity of dispersed dust equals even a moment of torture, and even if there’s some action you could take instead with a 1/3^^^3 chance of saving someone from torture. (We don’t need to mess around with hacks about hyperreal numbers and the like.) Likewise, it’s not irrational to spend 3^^^3 dollars to save a single human life, or to decline to spend that money, because, yup, the sort of rationality that compares values in this way just doesn’t apply to moral questions.
More broadly, there simply might not be a decision procedure at all. I think the sort of people who are drawn to the sort of Bentham-by-pocket-calculator reasoning that we see here are revealing a serious discomfort with the idea that there might be moral questions that are not easily resolved by some kind of algorithm. Or that might not be easily resolved, period, or resolved at all. There might be moral paradoxes. There might be irreconcilable moral conflicts. Normative truth might not follow the same laws that descriptive truth follows.
There’s a classic example in moral philosophy, thanks to Sartre. A young man in Vichy France has to choose between caring for his mother and fighting for the Resistance. Those who drink the decision-theoretic kool-aid are committed to the notion that there’s some kind of calculation, possible in principle, to determine which duty has a higher value. But that claim is really hard to swallow. Many people feel very strongly, when confronted by that example, that the poor young man is blameworthy for any decision he makes, for any decision must neglect some duty. He has had bad moral luck.
Incidentally, that example also shows that anyone who thinks moral absolutes are "comfortable" is seriously misinformed. Consider torture again. It’s hardly comfortable to stick to the morally absolute position that one can’t (in the classic hypothetical case) torture a terrorist to find out the location of the nuclear bomb in New York City. That’s a hard position. Anyone taking that position suffers, internally and externally, and suffers a lot. It’s a lot easier to fall into "expected utility" and justify the torture. So please don’t insult people who accept deontological moral theories by suggesting that they’re (we’re) hiding in a "comfortable" position.