34 Comments

Unknown: Quite independently of your point, it seems to me you have a very peculiar notion of "large".

regards, frank

N does not need to be particularly large, because the number of possible brain states a human being can have is not particularly large.

In any case, if 3^^^3 is too small, we can always choose Busy Beaver (3^^^3) instead, compared with which 3^^^3 is very, very, very close to zero.
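For readers unfamiliar with the notation: 3^^^3 uses Knuth's up-arrow notation, where each extra arrow iterates the operation below it. A minimal Python sketch of the recursion (illustrative only; anything beyond the smallest cases is uncomputable in practice):

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow: a (n arrows) b. One arrow is ordinary exponentiation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    # n arrows applied b times unfolds into (n-1)-arrow operations
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 3^27 = 7625597484987
```

3^^^3 = 3^^(3^^3) is a tower of 3s of height 7,625,597,484,987; the function above would simply never terminate on it, which is rather the point.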

To Michael Vassar's point re: non-intuitive scales: does not the scale cut both ways? Even granting (i) that each individual person's pain/disutility function P(x) is continuous for all states x between x1 [= dust speck] and x2 [= 50 years of torture], and (ii) that the cumulative disutility is linearly additive across any number of persons N, it is not clear that N*P(x1) > P(x2) for some number N = 3^^^3 or a googol or some other large number that exceeds a normal person's ability to form meaningful comparisons.

There seems to be an assumption that the ratio R = P(x2)/P(x1) < N. Why? It is not at all obvious to me that this is the case. Certainly R is large; perhaps so large as to call for Knuth's arrow notation or chained arrow notation or some other means of describing unconventionally large numbers; perhaps not. Some have appealed to the observation that at states near x1 and x2, you can make small enough changes to the states that you can form reasonable judgments as to the size of P(x1) versus M*P(x1+delta). That still leaves us with the question of how many deltas fall between x1 and x2. When you're dealing with two unknown values, determining that one is greater than the other on the basis that the former is greater than any number you've previously conceived of seems silly. Perhaps it is even a manifestation of a cognitive bias: the assumption that all unknown values fall within the range of previously conceived-of scales.
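The point can be made concrete with a toy sketch (all numbers here are invented for illustration, not estimates): given the two assumptions above, the verdict reduces to comparing N against R, and it flips entirely with the unargued choice of R.

```python
N = 10 ** 100  # a googol of dust-speck recipients (hypothetical)

def verdict(R):
    """Specks outweigh torture iff N * P(x1) > P(x2), i.e. iff N > R = P(x2)/P(x1)."""
    return "specks worse" if N > R else "torture worse"

print(verdict(10 ** 12))   # a "merely" large R: specks worse
print(verdict(10 ** 200))  # an R exceeding a googol: torture worse
```

Nothing in the comparison machinery itself tells us which regime we are in; that is exactly the open question about R.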

What arguments like Unknown's merely illustrate (granting the assumptions above) is that there is some number N for which N > R, provided P(x2) is finite and P(x1) is greater than zero, which is trivial given the assumptions.

Paul, yes, my argument only argues for simplicity, not which form.

Richard: You're also assuming there's some independent metric of value. Or at least, you seem to be. What if I say that torture is a better state of affairs to you, but dust specks are a better state of affairs to me? Or more powerfully, that I prefer dust specks if the tortured person is someone I care about, and torture otherwise? I can guarantee you that if the choice was between my best friend being tortured and 3^^^3 people I don't know getting the dust specks, I'd prefer option 2. But I don't think this has any particularly interesting results. The interesting questions don't come in until we start asking about agency and moral strictures.

Correction: considering the two states of affairs, "x persons suffer pain z" and "y persons suffer pain z+e", the second will be preferable, if x>y by a sufficiently large amount, and if e is made small enough.

(In other words, x>y, not y>x.)

Three points: First, utilitarianism is irrelevant, as Richard points out. I was myself thinking of the torture as inflicted by unintelligent robots or machines, and not by a personal agency. Even if it is a personal agency, as long as it isn't me, which of the two states of affairs is preferable can make a difference to my action, even if I think that torture is always wrong. (This will be explained below, in response to the objection that preference for the dust specks is in blatant contradiction to people's actions.)

Second, in response to Michael's claim that we don't have intuitions about a googol of something: we don't need them. The intuition is that considering the two states of affairs, "x persons suffer pain z" and "y persons suffer pain z+e", the second will be preferable, if y>x by a sufficiently large amount, and if e is made small enough. In other words, it is a general intuition that will have consequences for things involving a googol, but it doesn't need to involve a direct intuition about a googol.

Third, it is not true that people do not act on a preference for the dust specks over the torture, in terms of states of affairs. They do. They simply don't act on this in terms of actions. In this way they prefer "to allow the dust specks" rather than "to inflict torture on someone".

People do prefer the state of affairs where very great harms come to a small number of people rather than states where very small harms come to a very large number of people. For example: suppose everyone's taxes are raised by $10. In this way the US government can raise several billion dollars. Surely with this money it can prevent a few more murders, namely by acting in such a way that the murder rate decreases at least slightly. Do you prefer that we allow the murders that we could have prevented, or that we raise taxes on everyone by $10? People prefer to allow the murders. Notice, however, that no one prefers to be a murderer rather than to pay $10, or even to commit a murder rather than to raise taxes. People act like deontologists (whether or not they are deontologists philosophically). So they won't choose to perform the harmful action themselves, whatever the consequences. This explains James Miller's point about the assassinations versus the bombing campaign: the assassinations are seen as murders, but the collateral deaths in the bombing campaign are not. But at the same time, it shows that people do have a preference for the few concentrated harms, considered as states of affairs, and they act on this preference (for example, by being unwilling to pay more taxes).

Rolf: I'm starting to think that we're less far apart than I initially thought. I disagree with nothing you said in the last comment: nothing there is inconsistent with my two main objections to the "obvious" correctness of the torture choice, viz. a) "a deontologist doesn't even have to play that game," and b) "it's not that easy to aggregate utility across people." I confess, I have some sympathy to the lexical ordering of outcomes too, but I think Michael's point has convinced me to the contrary.

So while it still seems true that only a utilitarian (modulo aggregation issues) is forced to make the particular choice presented by Eliezer's example, your points are well-taken.

Paul, as a side note, if you re-read the comments (for example, this one) I think most of the people who've been replying in the past month are advocating a lexical ordering of outcomes based on their intuitions (which, as Michael Vassar pointed out, fail to take into account the fact that our intuition doesn't understand large numbers, and, as James Miller pointed out, are in blatant contradiction to their actual actions). Like Richard, I agree that this "unwillingness to do math" phenomenon is somewhat orthogonal to utilitarian vs. deontological arguments. You deontologists still need to contrast outcomes from time to time, and we utilitarians still sometimes get irrationally stubborn and refuse to synchronize our mathematical results with our axioms.

Uh, for "example" in that last paragraph, read "angle." Obviously my caffeine is wearing off.

Richard: but I think your example misses the central feature of the original problem, which happens also to be the feature that I think makes this really a debate about the virtues of utilitarianism. And that's the difference between torturous pain and torture. Torture entails the existence of a torturer in a way that torturous pain doesn't. And so in your case, I think a deontologist (assuming arguendo that pain is aggregable across people, etc.) could reasonably choose to stop (ii). But I don't think a deontologist could make the same choice if (i) were "you are about to torture someone," or "someone is about to be tortured by Torquemada." And I don't think you can offer me a case where the agency behind torture doesn't make the difference.

I suppose I'm really ducking your point here, which is that sure, the evaluation of states of affairs is sometimes relevant to people other than utilitarians. But I don't feel many compunctions about that move, because I think the original problem is one that utilitarians and deontologists have to disagree on, just because they're utilitarians and deontologists and it's about torturing people. Deontologists (ok, this deontologist) distinguish between torture and other kinds of pain just because torture violates a side-constraint.

I don't feel like that answer is very clear, possibly because we've reached a point where my thinking isn't very clear. Let me try it from a different example. In the context of your first comment -- I guess I just don't know what it would mean for a deontologist to say "torture is better." I know what "tortuous pain is better" means, but "torture is better" sounds to me like a claim about more than just states of affairs, but also a claim about actions.

(Good to have you commenting on this post, by the way. I'm rather fond of your blog.)

Rolf: in general, I prefer to stay away from declaring an alignment to a specific broad position in normative ethics, for all the major ones are subject to worrying objections and counterexamples. I happen to think the objections and counterexamples are more troubling with respect to utilitarianism than the others, but I'm not completely happy with any position on offer.

But if I were forced to pick, a good first pass might be something like Kant's categorical imperative or a similar deontological position that places priority (lexical, even) on not offending the autonomy and dignity of human beings and on universalization (with shades of Habermas's discourse ethics and the recognition-based ideas expressed by a variety of continental philosophers -- my favorite is Simone de Beauvoir's The Ethics of Ambiguity).

It's worthwhile to think about how a Kantian would evaluate the torture vs. dust specks case. Of course, the Kantian would say they're both wrong -- in each case, one is exercising coercion on other people such that they can't "contain within [themselves] the end of the action," but if forced to choose, I think one could easily pick the specks over the torture. First note that deontological ethics need not be aggregative. We don't have to say that violating the autonomy and dignity of 2 people is worse than one, or that we can attach some kind of scalar to the amount of injury we do to someone's autonomy and dignity such that we can sum those harms across multiple people, etc. Rather, I think that a Kantian would just say that the act of torturing is worse than the act of dust-specking regardless of the total harm, because it is a bigger outrage to the dignity of a moral agent, is a bigger disruption to the victim's carrying out of his own life, etc.

Paul - sure, there are some cases involving side-constraints that trump evaluations, but not all possible cases are like that. Imagine, for example, that the following three conditions hold: (i) someone is about to trip over themselves in such an awkward way that they will feel torturous pain; (ii) a dusty wind on a highly-populated planet is about to deposit dust-specks into zillions of eyes; and (iii) you have the power to prevent exactly one of these unfortunate events.

P.S. Besides, quite apart from considerations of action-guidance, it's just plain interesting to know what states of affairs are better or worse. It also directly guides rational preferences as to how we should prefer the world to be (independently of the question whether we should act so as to bring it about). And one could imagine it relevant to determining whether one is living in the best possible world, and hence whether a perfect God could plausibly have created it, etc. Evaluations are theoretically important for all sorts of reasons.

Paul, do you have a specific alternative philosophy that you would actually advocate? It's already been established that it's possible to construct insane moral philosophies (such as "everything I feel like doing is morally right") that are non-utilitarian.

Richard: I'm not sure that's quite right. I take a chief distinction between deontology (let's not complicate things by bringing in virtue ethics, whatever it was that Bernard Williams thought he was defending, etc.) and utilitarianism to be the presence of side-constraints, per Nozick, Shelly Kagan, etc. If there's a side constraint against torture, what reason is there to care whether the world where there is torture is better or worse than the world with the specks? Our evaluation of the state of affairs might be one piece of information we can use in determining whether a side constraint applies, sure, but for the notion of a side constraint to be meaningful, we must be able to say in at least some cases that X is wrong regardless of our evaluation of states of affairs, and thus that the evaluation is irrelevant.

anonymous, no one was claiming that "pain can just be summed linearly". Those who agreed with Eliezer were claiming that a sort of "archimedean principle" holds for pains -- that given any two bad things, *some* number of repetitions of the smaller will outweigh the larger -- but that's a far smaller claim than "pain can just be summed linearly". Unknown has given a schematic version of an argument for that weaker claim in this thread; the number 3^^^3 is so inconceivably enormous, of course, that one can proceed with far bigger increases in the number of people and far smaller decreases in the amount of badness.
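That schematic argument can be sketched as a toy iteration (the factors below are invented for illustration): each step multiplies the number of sufferers enormously while reducing each one's pain only slightly, so if each individual step preserves total disutility, transitivity chains the comparisons all the way from torture to specks.

```python
pain = 1.0   # pain per person, normalized so the torture end is 1.0
people = 1
for _ in range(100):
    people *= 10 ** 6  # a millionfold more sufferers per step...
    pain *= 0.99       # ...each suffering 1% less
# Each step multiplies total "people * pain" by 10^6 * 0.99, far above 1,
# yet after 100 steps the per-person pain has only fallen to about 0.366.
print(people, pain)
```

With 3^^^3 sufferers available instead of a mere 10^600, one can afford vastly more steps and far gentler decrements per step, which is why the claim needs only the per-step intuition, never a direct intuition about the endpoint.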

That argument depends on another assumption, namely that there's some kind of continuum between dust specks and years of torture. That seems plausible to me, not least because it seems easy to construct what looks like a set of quite closely spaced points between dust specks and torture, but of course it wouldn't be true if torture were really a kind of "higher-order suffering" as you suggest.
