Unknown: Quite independently of your point, it seems to me you have a very peculiar notion of "large".

Regards, frank
N does not need to be particularly large, because the number of possible brain states a human being can have is not particularly large. In any case, if 3^^^3 is too small, we can always choose Busy Beaver(3^^^3) instead, compared with which 3^^^3 is very, very, very close to zero.
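As an aside, the 3^^^3 referred to throughout this thread is Knuth's up-arrow notation (3↑↑↑3), which has a simple recursive definition. A minimal Python sketch, workable only for tiny arguments, since 3^^^3 itself is astronomically beyond anything a computer could represent:

```python
def up(a, b, n):
    """Knuth's up-arrow: a (arrow x n) b.
    n = 1 is ordinary exponentiation; each extra arrow
    iterates the previous operation b times."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, up(a, b - 1, n), n - 1)

print(up(3, 2, 2))   # 3^^2 = 3^3 = 27
print(up(3, 3, 2))   # 3^^3 = 3^27 = 7625597484987
```

Even 3^^4 = 3^7625597484987 already has trillions of digits, which is some indication of how far beyond intuition 3^^^3 = 3^^(3^^3) lies.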
To Michael Vassar's point re: non-intuitive scales: Does not the scale cut both ways? Even granting:

(i) each individual person's pain/disutility function P(x) is continuous for all states x between x1 [= dust speck] and x2 [= 50 years of torture], and

(ii) the cumulative disutility is linearly additive across any number of persons N,

it is not clear that N*P(x1) > P(x2) for some number N = 3^^^3 or a googol or some other large number that exceeds a normal person's ability to form meaningful comparisons.

There seems to be an assumption that the ratio R = P(x2)/P(x1) < N. Why? It is not at all obvious to me that this is the case. Certainly R is large; perhaps so large as to call for Knuth's arrow notation or chained arrow notation or some other means of describing unconventionally large numbers; perhaps not. Some have appealed to the observation that at states near x1 and x2, you can make small enough changes to the states that you can form reasonable judgments as to the size of P(x1) versus M*P(x1+delta). That still leaves us with the question of how many deltas fall between x1 and x2. When you're dealing with two unknown values, determining that one is greater than the other on the basis that the former is greater than any number you've previously conceived of seems silly; perhaps it is even a manifestation of a cognitive bias that all unknown values fall within the range of previously conceived-of scales.

All that arguments like Unknown's illustrate (granting the assumptions above) is that there is some number N for which N > R, provided P(x2) is finite and P(x1) is greater than zero, which is trivial given the assumptions.
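The existence claim in that last paragraph can be made concrete: granting linear additivity, whenever P(x1) > 0 and P(x2) is finite, the smallest N with N*P(x1) > P(x2) is just floor(R) + 1. A minimal sketch, with made-up disutility values purely for illustration (the whole dispute, of course, is over what R actually is):

```python
import math

def smallest_n(p_speck, p_torture):
    """Smallest integer N with N * p_speck > p_torture,
    assuming linear aggregation, p_speck > 0, p_torture finite."""
    return math.floor(p_torture / p_speck) + 1

# Illustrative, made-up values; only the existence of N is being shown.
print(smallest_n(0.5, 100.0))  # 201: the first N with N * 0.5 > 100.0
```

Note that this handles the boundary case where R is exactly an integer k: N = k + 1, since k * P(x1) = P(x2) is not strictly greater.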
Paul, yes, my argument only argues for simplicity, not which form.
Richard: You're also assuming there's some independent metric of value. Or at least, you seem to be. What if I say that torture is a better state of affairs to you, but dust specks are a better state of affairs to me? Or more powerfully, that I prefer dust specks if the tortured person is someone I care about, and torture otherwise? I can guarantee you that if the choice was between my best friend being tortured and 3^^^3 people I don't know getting the dust specks, I'd prefer option 2. But I don't think this has any particularly interesting results. The interesting questions don't come in until we start asking about agency and moral strictures.
Correction: considering the two states of affairs, "x persons suffer pain z" and "y persons suffer pain z+e", the second will be preferable if x>y by a sufficiently large amount, and if e is made small enough. (In other words, x>y, not y>x.)
Three points: First, utilitarianism is irrelevant, as Richard points out. I was myself thinking of the torture as inflicted by unintelligent robots or machines, and not by a personal agency. Even if it is a personal agency, as long as it isn't me, which of the two states of affairs is preferable can make a difference to my action, even if I think that torture is always wrong. (This will be explained below, in response to the objection that preference for the dust specks is in blatant contradiction to people's actions.)

Second, in response to Michael's claim that we don't have intuitions about a googol of something: we don't need them. The intuition is that considering the two states of affairs, "x persons suffer pain z" and "y persons suffer pain z+e", the second will be preferable, if y>x by a sufficiently large amount, and if e is made small enough. In other words, it is a general intuition that will have consequences for things involving a googol, but it doesn't need to involve a direct intuition about a googol.

Third, it is not true that people do not act on a preference for the dust specks over the torture, in terms of states of affairs. They do. They simply don't act on this in terms of actions. In this way they prefer "to allow the dust specks" rather than "to inflict torture on someone".

People do prefer the state of affairs where very great harms come to a small number of people rather than states where very small harms come to a very large number of people. For example: suppose everyone's taxes are raised by $10. In this way the US government can raise several billion dollars. Surely with this money it can prevent a few more murders, namely by acting in such a way that the murder rate decreases at least slightly. Do you prefer that we allow the murders that we could have prevented, or that we raise taxes on everyone by $10? People prefer to allow the murders.
Notice, however, that no one prefers to be a murderer rather than to pay $10, or even to commit a murder rather than raise taxes. People act like deontologists (whether they are philosophical deontologists or not). So they won't choose to perform the harmful action themselves, whatever the consequences. This explains James Miller's point about the assassinations versus the bombing campaign; the assassinations are seen as murders, but the collateral deaths in the bombing campaign are not. At the same time, it shows that people do have a preference for the few concentrated harms, considered as states of affairs, and they act on this preference (for example, by being unwilling to pay more taxes).
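A rough sanity check of the "several billion dollars" figure in the tax example, using an assumed round population figure (roughly 300 million US residents, a hypothetical number chosen only for order-of-magnitude illustration):

```python
# Hypothetical round figures, for order-of-magnitude illustration only.
residents = 300_000_000   # assumed US population, not an exact figure
tax_increase = 10         # dollars per person

revenue = residents * tax_increase
print(revenue)            # 3000000000, i.e. a few billion dollars
```

So a $10-per-person increase does land in the "several billion" range the comment claims.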
Rolf: I'm starting to think that we're less far apart than I initially thought. I disagree with nothing you said in the last comment: nothing there is inconsistent with my two main objections to the "obvious" correctness of the torture choice, viz. a) "a deontologist doesn't even have to play that game," and b) "it's not that easy to aggregate utility across people." I confess I have some sympathy for the lexical ordering of outcomes too, but I think Michael's point has convinced me to the contrary. So while it still seems true that only a utilitarian (modulo aggregation issues) is forced to make the particular choice presented by Eliezer's example, your points are well-taken.
"And that's the difference between torturous pain and torture"

Paul, as a side note, if you re-read the comments (for example, this one) I think most of the people who've been replying in the past month are advocating a lexical ordering of outcomes based on their intuitions (which, as Michael Vassar pointed out, fail to take into account the fact that our intuition doesn't understand large numbers, and which, as James Miller pointed out, are in blatant contradiction to their actual actions). Like Richard, I agree that this "unwillingness to do math" phenomenon is somewhat orthogonal to utilitarian vs. deontological arguments. You deontologists still need to contrast outcomes from time to time, and we utilitarians still sometimes get irrationally stubborn and refuse to synchronize our mathematical results with our axioms.
Uh, for "example" in that last paragraph, read "angle." Obviously my caffeine is wearing off.