In addition to (alleged) scope insensitivity and "motivated continuation," I would like to suggest that the incredibly active discussion on the torture vs. specks post is also driven in part by a bias toward, well, toward closure; a bias toward determinate answers; a bias toward decision procedures that are supposed to yield an answer in every case, and ones that can be implemented by humans in the world in which we live, with the biological and social pressures that we face.
That’s the wonderful thing about the kinds of utilitarian intuitions that tell us, deep in our brains, that we can aggregate a lot of pain and pleasure of different kinds among different people and come up with some kind of scalar representing the net sum of "utility" to be compared to some other scalar for some other pattern of events in some possible world; the scalars to be compared to determine which world is morally better, and to which world our efforts should be directed. Those intuitions always generate a rationalizable answer.
If we demand that our moral questions have answers of that type, comments like Eliezer’s start to look very appealing. Eliezer says that it’s irrational to "impose and rationalize comfortable moral absolutes in defiance of expected utility." But if that’s so, Eliezer owes us an argument for why moral judgments make sense in terms of expected utility. Or why they make sense in terms of any decision theoretic calculation at all. Or why they have to make sense in terms of any overall algorithmic procedure of any kind. To simply assume that decision theory applies to moral questions, that there’s something — utility, goodness, moral worth, whatever — to maximize is to beg the question right from the start. And that’s bias if anything is.
One can easily argue that making decision theoretic arguments about moral questions is a massive category error. On that view, it’s not irrational to blink away the dust speck even if you believe that no quantity of dispersed dust equals even a moment of torture, and even if there’s some action you could take instead with a 1/3^^^3 chance of saving someone from torture. (We don’t need to mess around with hacks about hyperreal numbers and the like.) Likewise, it’s not irrational to spend 3^^^3 dollars to save a single human life, or to decline to spend that money, because, yup, the sort of rationality that compares values in this way just doesn’t apply to moral questions.
But more broadly, there simply might not be a decision procedure at all. I think the sort of people who are drawn to the sort of Bentham-by-pocket-calculator reasoning that we see here are revealing a serious discomfort with the idea that there might be moral questions that are not easily resolved by some kind of algorithm. Or that might not be easily resolved, period, or resolved at all. There might be moral paradoxes. There might be irreconcilable moral conflicts. Normative truth might not follow the same laws that descriptive truth follows.
There’s a classic example in moral philosophy, thanks to Sartre. A young man in Vichy France has to choose between caring for his mother and fighting for the Resistance. Those who drink the decision theoretic kool-aid are committed to the notion that there’s some kind of calculation that’s possible in principle to determine which duty has a higher value. But it’s really hard to swallow that claim. Many people feel very strongly, when confronted by that example, that the poor young man is blamable for any decision he makes, for any decision must neglect some duty. He has had bad moral luck.
Incidentally, that example also shows that anyone who thinks moral absolutes are "comfortable" is seriously misinformed. Consider torture again. It’s hardly comfortable to stick to the morally absolute position that one can’t (in the classic hypothetical case) torture a terrorist to find out the location of the nuclear bomb in New York City. That’s a hard position. Anyone taking that position suffers, internally and externally, and suffers a lot. It’s a lot easier to fall into "expected utility" and justify the torture. So please don’t insult people who accept deontological moral theories by suggesting that they’re (we’re) hiding in a "comfortable" position.
There's a central question that I think it is useful to address directly, at least to some extent, for this discussion: what is morality? What does it mean for a choice to be "immoral"? This is really a question of definition, so the answer is somewhat subjective.
I think "moral" and "immoral" are complex terms in themselves: almost(?) anyone could increase their net benefit to others, if they made personal sacrifices to do so, as most people do not have as their foremost goal to maximize their benefit to others at all times. Is it "immoral" not to hold altruism as the goal of highest priority in all cases? It is merely less moral than the highest possible degree of morality. I think it is usually more useful to speak of morality of choices relative to their alternatives, than by using vaguely-defined thresholds for a "moral" choice versus an "immoral" choice - language which I think can lead to a lot of confusion.
Morality, to me, refers to how one's actions affect one's expectation for another's expected utility. That is, in a simple world with only Alice and Bob, it is immoral for Alice to act in a way that she would expect to reduce (Bob's expected utility, according to Bob). The morality of Alice's choice X can be described thus:
Ma: Morality for Alice
X: a certain option
Pa: Alice's perception
Ub: Bob's expected utility
Uba: Bob's expected utility, as Alice would predict it
Ma(X) = Uba(X) = Pa(Ub(X))
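For concreteness, here is a minimal Python sketch of that two-agent definition. The names `morality` and `predict` are my own illustrative stand-ins, not part of the discussion itself; `predict` plays the role of Pa(Ub(X)), Alice's prediction of the change in Bob's expected utility.

```python
from typing import Callable

# (judge, subject, option) -> predicted change in the subject's expected utility
PredictFn = Callable[[str, str, str], float]

def morality(judge: str, subject: str, option: str, predict: PredictFn) -> float:
    """Ma(X) = Pa(Ub(X)): the morality of `option`, as judged by `judge`,
    is the judge's own prediction of its effect on `subject`'s expected utility."""
    return predict(judge, subject, option)
```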
Though this quantitative view is a simple model that provides a way of analysing decisions from a moral standpoint, it still does not make answering moral questions easy: the difficult part is counting not just Alice's expectation of the effect of a decision on Bob, but her expectation of its effects on everyone affected (though, as I note below, not on Alice herself). A decision's overall morality is clearly determined by its effects on everyone affected.
It would seem that summing one's expected effect on all people involved is the proper way to determine overall morality, as such a system has the desirable property that a decision that affects ten bystanders in a certain way is morally equivalent to ten smaller decisions, each affecting one of those people individually in the same way.
In order to perform such a summation, it is necessary to normalize other people's expected utilities - or equivalently, to measure them all in absolute, interchangeable units of effect. While this is impossible to do perfectly, I think it is quite possible to approximate.
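As a sketch of what that aggregation might look like, assuming for illustration that each affected person's predicted utility change can be rescaled by a per-person normalization weight before summing (the function and weights here are my own, not part of the original proposal):

```python
from typing import Dict

def overall_morality(predicted_changes: Dict[str, float],
                     weights: Dict[str, float]) -> float:
    """Sum the decision-maker's predicted utility changes over everyone
    affected, rescaling each person's change by a normalization weight so
    that one unit means roughly the same thing for everyone."""
    return sum(weights[person] * change
               for person, change in predicted_changes.items())

# Example: Alice predicts that option X helps Bob somewhat and slightly harms
# Carol; with both already in comparable units, the weights are just 1.0.
print(overall_morality({"Bob": 2.0, "Carol": -0.5},
                       {"Bob": 1.0, "Carol": 1.0}))  # prints 1.5
```

Note that Alice herself does not appear in the sum, which matches the point made next about benefit or harm to the decision-maker.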
Note that this morality function is not affected by benefit or harm to the decision-maker. This may seem like a flaw; is it not more moral for Alice to harm Bob slightly if she must do so to receive some great benefit, than it is for Alice to do the same harm for very little benefit to herself? No, I think it is not. This kind of decision generally exposes the value Alice places on morality: in the former situation, Alice may make the decision even if she places a fairly high price on morality, while the latter situation suggests that Alice is willing to take immoral action for a lower price. The morality of the choice is not affected by such context, though what it may reveal about Alice's morality is (and the two are easily confused).
One possible objection to this system would be the idea that this would mean that it is morally neutral to help Hitler achieve his goals at the expense of hindering Gandhi, each to such an extent that their expected utilities are affected by the same (but opposite) amount. Not so! This quantitative approach requires taking into account all foreseeable effects, including indirect effects... which is part of the reason moral questions aren't easy.
So I agree that moral questions aren't easy, but I don't think it's because it's difficult to reason about morality abstractly; in my opinion the difficulty is mostly in forecasting the effects of one's decisions, which is a problem squarely in the realm of decision theory - and thus the methods of doing so are well studied.
I would put Paul's question another way. If, as many people (including myself) believe, what we call moral systems are actually rules of thumb generated by genetic machinery that evolved to ensure group survival in the early days of humanity, then we should not expect to be able to do any moral calculus. The moral rules of thumb we have were not designed to be consistent or "right" in any fundamental sense; they are there only to ensure that the group worked together, and rules that didn't work didn't survive. As a result, in today's modern world, where we face many situations that our ancestors never did, it is to be expected that we will encounter many moral paradoxes. If genetic rules of thumb are really where morality originates, it will be especially easy (trivial) to generate hypothetical examples of moral paradoxes, or difficult moral calculations, as we have seen on this blog.
On the question of whether decision theory is useful in moral calculations, I would say there is an analogy with economics, where the utility question is simplified to the optimisation of a particular chosen variable, usually money. Once you have decided to optimise that variable, a calculation becomes possible. But economics has no say in whether one variable should be optimised over another.