Knowing your argumentative limitations, OR “one [rationalist’s] modus ponens is another’s modus tollens.”

Followup to: Who Told You Moral Questions Would be Easy?
Response to: Circular Altruism

At the most basic level (which is all we need for present purposes), an argument is nothing but a chain of dependence between two or more propositions.  We say something about the truth value of the set of propositions {P1…Pn}, and we assert that there’s something about {P1…Pn} such that if we’re right about the truth values of that set, we ought to believe something about the truth value of the set {Q1…Qn}. 

If we have that understanding of what it means to make an argument, then we can see that an argument doesn’t necessarily have any connection to the universe outside itself.  The utterance "1. all bleems are quathes, 2. the youiine is a bleem, 3. therefore, the youiine is a quathe" is perfectly valid logically, but it doesn’t refer to anything in the world — it doesn’t require us to change any beliefs.  The meaning of any argument is conditional on our extra-argument beliefs about the world.

One important use of this principle is reflected in the oft-quoted line "one man’s modus ponens is another man’s modus tollens."  Modus ponens is a classical form of argument: 1. A→B.  2. A.  3. ∴ B.  Modus tollens is this: 1. A→B.  2. ¬B.  3. ∴ ¬A.  Both are perfectly valid forms of argument!  (For those who aren’t familiar with the standard notation, "→" means "implies," "¬" means "not," and "∴" means "therefore.")  Unless you have some particular reason outside the argument to believe A, or to disbelieve B, the bare claim A→B doesn’t tell you whether B is true or A is false!
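
To see the point mechanically, here is a minimal sketch (A and B are arbitrary placeholders; nothing in it is specific to the debate below):

```python
from itertools import product

# Enumerate every truth assignment consistent with the premise A -> B.
# The conditional is false only when A is true and B is false.
consistent = [(A, B) for A, B in product([True, False], repeat=2)
              if (not A) or B]

print(consistent)
# [(True, True), (False, True), (False, False)]
# The premise alone leaves "B is false" (and "A is false") on the table:
# you only get modus ponens by independently supplying A, and you only
# get modus tollens by independently supplying not-B.
```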

Why am I elucidating all this basic logic, which almost everyone reading this blog doubtless knows?  It’s a rhetorical tactic: I’m trying to make it salient, to bring it to the top of the cognitive stack, so that my next claim is more compelling.

And that claim is as follows:

Eliezer’s posts about the specks and the torture [1] [2], and the googolplex of people being tortured for a nanosecond, and so on, and so forth, tell you nothing about the truth of your intuitions.

Argument behind the fold…

At most, at most!, Eliezer’s arguments establish an inconsistency between two propositions.  Proposition 1: "utilitarianism is true."  Proposition 2: "your intuitions about putting dust specks in people’s eyes, sacred values, etc., to the extent they recommend inflicting a small harm on lots of people rather than a lot of harm on one person, when the aggregate pain from the first is higher than the aggregate pain from the second, are true."  As I’ve noted before, I don’t think Eliezer has even established that.  (The short version: utilitarianism is a lot more complicated than that, it ain’t easy to figure out how to aggregate harms, it ain’t easy to map those harms onto hedonic states like pleasure and pain, etc.)

But let’s give Eliezer that one, arguendo.  Suppose his argument has established the inconsistency.  In symbols, where P = utilitarianism, and Q = your intuitions about dust specks etc., Eliezer has established ¬P∨¬Q.  (Not P, or not Q.)  It doesn’t establish ¬Q!  Unless there’s more exogenous reason to believe P than there is to believe Q, Eliezer’s argument shouldn’t be any more likely to cause us to disbelieve P than to disbelieve Q.  This is the step that should make your heart sing, now that I’ve primed you with the review of basic logic above.
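
One way to make “exogenous reason” concrete is probabilistic. The sketch below is only illustrative: the priors are made up and the independence assumption is a simplification, not anything Eliezer has argued. It shows that conditioning on the inconsistency lowers confidence in P and Q by amounts that depend entirely on how confident you were in each beforehand.

```python
# Toy Bayesian version of the point: learning "not both P and Q"
# redistributes belief according to your *prior* confidence in each.
# The priors below are arbitrary placeholders, and P and Q are treated
# as independent beforehand purely to keep the arithmetic simple.

def condition_on_inconsistency(p_P: float, p_Q: float):
    """Posterior probabilities of P and of Q after learning not-(P and Q)."""
    p_evidence = 1 - p_P * p_Q                # prior probability of the inconsistency
    post_P = p_P * (1 - p_Q) / p_evidence     # P can now only hold alongside not-Q
    post_Q = p_Q * (1 - p_P) / p_evidence
    return post_P, post_Q

# Equal priors: the inconsistency hits both propositions symmetrically.
print(condition_on_inconsistency(0.6, 0.6))   # (~0.375, ~0.375)

# Only if you start out more confident in P (utilitarianism) than in Q
# (the dust-speck intuition) does the argument push you toward rejecting Q.
print(condition_on_inconsistency(0.9, 0.6))   # (~0.78, ~0.13)
```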

Now let’s take the next step.  Why should there be more exogenous reason to believe P than to believe Q?  Why might one want to believe that utilitarianism is true? 

This post is already far too long to go over the abstract reasons why one might accept utilitarianism.  But let me make the claim, which you might find plausible, that many of those reasons come down to intuitions.  Those intuitions might be about specific cases which lead to inductive generalizations about rules ("I think it’s better to kill one person than to kill five, and better to torture for a week than torture for a year, therefore, it must be best to maximize pleasure over pain!"), or intuitions directly about the rules ("well, obviously, it’s best to maximize pleasure over pain!").  Regardless, intuitions they be.

And now let’s subjectivize things a little further.  I’ll bet that the vast majority of the people reading this post who hold utilitarian beliefs came to those beliefs largely as a result of articulating their moral intuitions, or reading an argument about normative ethics that spoke to their moral intuitions.  Eliezer’s own case is a perfect example: he has expressed his utilitarian beliefs as a direct consequence of his seemingly intuitive choices.

And now the final step.  You get your moral intuitions about the dust specks case from wherever it is that your intuitions come from.  You get your utilitarianism from wherever it is that your intuitions come from.   They’re on equal footing — you have no more reason to believe your utilitarian intuitions than you have to believe your dust speck intuitions!  Therefore, by the claims above, Eliezer’s argument shouldn’t cause you to reject your dust specks intuition.

A summary:
1.  An argument establishing that two propositions are inconsistent doesn’t tell you which of those propositions you should reject, unless you have more reason outside the argument to accept one or the other. 
2.  For any two propositions P and Q, if you accept P for only the same reasons you accept Q, you don’t have more reason to accept P than Q.
3.  Your reasons to believe dust specks are better than torture are identical to your reasons to believe utilitarianism is true.
4.  Therefore, an argument (Eliezer’s) establishing that dust specks>torture is inconsistent with utilitarianism doesn’t give you any reason to reject dust specks>torture.

Q.E.D. 

(A couple objections to this argument: 1) "But what if my intuitions about utilitarianism come from many, many cases, and I only have renegade non-utilitarian intuitions about a few cases — doesn’t that mean I should believe my utilitarian intuitions more strongly?"  Answer:  Sure, if and only if you think that the strength of intuitions can be summed that way, and it’s not obvious that’s true.  Also, I can come up with many more cases than just the dust specks where your intuitions likely get non-utilitarian outcomes.  2) I was recently handed a paper where an undergrad argued that the intuitions of utilitarians tend to [always, even] match the results of utilitarian calculations [should she read this post, I invite her to defend that claim in the comments].  If true, that would cause problems… but does anyone actually believe it?) 

This all connects back quite strongly to the point of this blog.  Taking an argument of the form ¬P∨¬Q and concluding, on that basis alone, ¬Q is an error in reasoning, and it’s one that strongly resembles a form of overconfidence — or perhaps expecting short inferential distances.

That’s where the real fierceness lies.  There’s the naked sword.  There’s the solar plasma: in recognizing the limitations of your arguments, the point where the road — or "The Way" — stops. 

 

  • Timothy Scriven

    Actually I read Eliezer’s post as a good argument against utilitarianism, a good illustration of the principle if there ever was one! I’ve never really bought utilitarianism and Eliezer’s post seemed like another thought experiment to add to my already long list of reasons why utilitarianism is morally counter intuitive.

  • Unknown

    In the dust speck discussion, I never saw Eliezer say “utilitarianism is true” or even “pain should be minimized.” In other words, Paul, you are simply wrong to posit that these are the intuitions that the argument depends on.

    I think I made the nature of the argument clear myself. The intuitions in question are that a slightly greater pain, found in only one person, is preferable to a slightly lesser pain, found in a very large number of persons, and the intuition (one that can be, if necessary, based on additional “intuitions”, i.e. experience) that pain comes in slight increments.

    I haven’t seen anyone explicitly deny either of these things, and in order for your point to be valid, someone would have to deny at least one of them. I grant that either way, we must depend on an intuition. But is the dust speck intuition, a highly complicated claim, more obvious than the above two statements?

    In any case, your discussion of utilitarianism is quite beside the point, since that wasn’t part of the argument.

  • http://www.frankhirsch.net Frank Hirsch

    Unknown:
    Granted, he didn’t write that he subscribed to any form of Utilitarianism. But if we don’t take that for granted, then what’s the argument supposed to be about? If I were asked how many stars at most I would rather have losing one helium atom each, compared to a single star losing a billion hydrogen atoms, I’d say I’d have a hard time trying to care less. More probably still, I wouldn’t care enough to answer at all.

    I think Paul’s argument establishes that, in the end, all Eliezer really proves is that we have contradictory intuitions about the topic. Which is not even surprising, as large numbers are a rather reliable instrument for defeating our intuitions. I suppose we’re just not built for them.

    regards, frank

  • http://hanson.gmu.edu Robin Hanson

    We have intuitions about many many related cases, so I think we have no choice but to weigh them somehow in producing our net conclusions. And as I argue here, the more error we expect in those intuitions, the simpler our best estimates will be. So your best estimates are something like Utilitarianism if you think error rates are very high.

  • tcpkac

    Unknown, we can go round in circles like this till the cows come home, but just to clarify one aspect:
    Whatever intuitions you have about ‘slightly more’ or ‘slightly less’ pain summed across more or less individuals (and I can share those intuitions) do not necessarily hold good, IMO, when you push the sliders to unimaginable pain at one end of the scale and negligible pain at the other end.
    If you want to do math over the subjective experience of physical pain, my intuition suggests a hyperbolic function would be more appropriate, that is, the ethical weight of incremental physical pain tends rapidly and asymptotically to zero at the lower end as it tends to infinity at the higher end.
    My intuition also says there are cutoff points at both ends of the scale: ‘negligible’ at one end, and ‘dead’ at the other.
    My intuition finally says that this approach to the ethical question is dead wrong.
    There are many possible approaches to the ethics of the question: the mathematical, as above; the truly subjective (ask each of the 3^^^3 people what they choose), which is what I prefer; the evolutionary; the neurological-subjective; and the materialist-economic all come to mind.
    All of them would depend on definitions of ‘the greater good’ which are subjective, and all would, a fortiori, depend on rules which would also be subjective.
    Paul Gowder’s post reminds us that demonstrating that one set of subjectivities differs from another doesn’t show that either is right.

  • Unknown

    “Whatever intuitions you have about ‘slightly more’ or ‘slightly less’ pain summed across more or less individuals (and I can share those intuitions) do not necessarily hold good, IMO, when you push the sliders to unimaginable pain at one end of the scale and negligible pain at the other end.”

    This may be your opinion, but it is wrong. Let me lay out a whole series of intuitions to illustrate this:

    Intuition: 1 person suffers pain 1.0 is preferable to 10 persons suffer pain 0.9

    Intuition: 10 persons suffer pain 0.9 is preferable to 100 persons suffer pain 0.8

    Intuition: 100 persons suffer pain 0.8 is preferable to 1,000 persons suffer pain 0.7

    Intuition: 1,000 persons suffer pain 0.7 is preferable to 10,000 persons suffer pain 0.6

    Intuition: 10,000 persons suffer pain 0.6 is preferable to 100,000 persons suffer pain 0.5

    Intuition: 100,000 persons suffer pain 0.5 is preferable to 1,000,000 persons suffer pain 0.4

    Intuition: 1,000,000 persons suffer pain 0.4 is preferable to 10,000,000 persons suffer pain 0.3

    Intuition: 10,000,000 persons suffer pain 0.3 is preferable to 100,000,000 persons suffer pain 0.2

    Intuition: 100,000,000 persons suffer pain 0.2 is preferable to 1,000,000,000 persons suffer pain 0.1

    Conclusion: 1 person suffers pain 1.0 is preferable to 1,000,000,000 persons suffer pain 0.1.

    I.e., 1 person tortured for 50 years is preferable to XYZ dust specks.
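
    To make the shape of this chain explicit, here is a minimal sketch; the tenfold population step and the 0.1 pain decrement are just the placeholder numbers from the list above, not quantities anyone in the thread has defended:

    ```python
    # Sketch of the chain: each printed line is one pairwise intuition
    # ("the more concentrated harm is preferable to the more dispersed
    # one"), and transitivity strings the nine steps into the conclusion.

    steps = [(10 ** k, round(1.0 - 0.1 * k, 1)) for k in range(10)]
    # steps[0] == (1, 1.0); steps[-1] == (1_000_000_000, 0.1)

    for (n_few, p_high), (n_many, p_low) in zip(steps, steps[1:]):
        print(f"{n_few:,} persons suffer pain {p_high} is preferable to "
              f"{n_many:,} persons suffer pain {p_low}")

    # If "is preferable to" is transitive, the endpoints inherit the ordering:
    print(f"Conclusion: {steps[0][0]:,} person suffers pain {steps[0][1]} "
          f"is preferable to {steps[-1][0]:,} persons suffer pain {steps[-1][1]}")
    ```

    Rejecting the conclusion therefore requires rejecting either transitivity or at least one of the pairwise steps.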

  • tcpkac

    Robin Hanson, I read your healthcare paper with interest. If many moral intuitions are a good way off the ‘curve’ represented by Utilitarianism, then,
    even discarding some of them as suspect, and
    even demonstrating that Utilitarianism is simpler than alternative ‘curves’ (ethical theories), which I assume for the sake of argument is demonstrated somewhere,
    you’d still have to demonstrate that the Utilitarianism curve was reasonably close to a ‘working majority’ of moral intuitions to be able to claim it as a good fit…

  • http://rolfnelson.blogspot.com Rolf Nelson

    Paul, unfortunately there’s no valid logical argument from “I think, therefore I am” to “it’s wrong for Bob to murder 10 random people for no gain”. However, most of us choose to have some type of moral goals. I personally choose the general increase in human welfare as one of those goals. I can’t really stop you from choosing “rationalizing whatever moral intuitions (whims) I have today” as your goal; I can only hope that if I frame the question in that way, you’ll at some point realize that’s not what you *really* want to do with your life when you sit down and think about it.

    Yes, the definition of what human welfare entails is, perhaps wholly, based on intuition and unprovable moral axioms. However, deciding how to then maximize welfare, if we knew what welfare is, is the domain of reason rather than arbitrary intuition. If we get a counter-intuitive result, we can revisit the axioms and realize that they’re not the axioms we actually want, but to say that we should abandon consistency and cheerfully accept inconsistent conclusions is, by definition, madness.

    Non-utilitarianism usually means that if you’re given a choice between A and B, and told C is not an option, you’ll choose A. But if you’re given a choice between A, B, and C, you’ll choose B. If you want to be a more rational person, then you should realize that there’s something sub-optimal about the way you’re choosing to live your life.

  • michael vassar

    A key point here is that we don’t actually have any intuitions about a googol or more of anything, because we have no intuitive sense of a googol. Being able to put down symbols on paper that could in principle have meaning to someone doesn’t mean those symbols have meaning to you. Since we don’t have any real intuitions about what a googol even means, if we ever have to make decisions involving a googol of something, we need to build a general theory that we can do math with and then use that general theory to find out what happens when we have a googol of something.

  • http://hanson.gmu.edu Robin Hanson

    tcpkac, yes of course.

    Michael, good point.

  • Paul Gowder

    Robin: yes, I think that point in your paper is a challenge to the argument I present here. I’ve been mulling it over for a few days, and I guess my best answer right now is that there are other simple estimates (basically, the other schools of normative ethics, as well as things like the golden rule) that are roughly on the same level as utilitarianism, and it doesn’t seem like your argument gives a way to pick between them.

    Rolf: how does non-utilitarianism entail violating an independence axiom?

    Michael: That’s a very interesting point. I’ll have to think about it some more before hazarding a reply.

  • Caledonian

    Our ethical impulses were evolved to deal with particular sorts of situations. At no point were they ever required to deal with concepts or amounts anywhere close to a googol.

    Asking what our ethical intuitions say about such cases is wrong. Our ethical systems don’t say anything about those cases. They are completely beyond their scope, and outside of our ability to process and comprehend them.

    We can manipulate symbols to talk about a googol. That does not mean that we can comprehend that many of anything – and in fact, we cannot. Our understanding of such things is limited only to explicit language processing.

  • anonymous

    One big problem I had with the whole dust specks thing is the assumption that pain can just be summed linearly, so that one man’s lifetime of torture is the same as many people’s minor pains. It seems to me that the pain of losing your entire life to torture is an entirely different order of pain. It’s not the same as experiencing the minor inconvenience over and over again… higher order pain emerges: depression over losing your entire life etc. I can’t help but think of Alephs from set theory: no matter how many people suffer from one speck, it can never be as bad as one person suffering for a lifetime, because his higher order suffering outweighs even an infinite number of isolated instances of dust-induced nuisance.

  • http://rolfnelson.blogspot.com Rolf Nelson

    how does non-utilitarianism entail violating an independence axiom?

    Paul, in the absence of probability, if you can construct a partial ordering of all the complete states of the universe, you can easily map that onto an inferred utility function. You’d have to use higher math like hyperreals to model odd things like “lexical ordering”, but (1) this is doable if you really wanted to, and (2) I agree with James Miller that nobody actually has a lexical ordering of their preferences.

    If you bring in the concept of probability, then you may have to bring in other constraints besides independence to avoid being Dutch-booked. However, this is irrelevant to the dust-particle example, which does not directly involve uncertainty.

  • Doug S.
  • http://philosophyetc.net Richard

    Technical quibble (echoing ‘Unknown’): utilitarianism is usually considered to be a theory about right and wrong actions. What you’re talking about is evaluating states of affairs as better or worse. It’s entirely open for someone to think that (i) torture is better, but (ii) it would be impermissible to act on this (say by torturing one person to prevent zillions of dust-specks). So these are separate issues.

  • Paul Gowder

    Richard and unknown: the discussion seems to become silly if we’re not trying to say something about utilitarianism. Who cares which state of affairs we evaluate as better or worse apart from questions about what practical reason ought to do if faced with the choice?

  • http://philosophyetc.net Richard

    Recall, a non-consequentialist is just someone who thinks that the value of states of affairs is not the only relevant consideration in deciding how to act. Nobody sensible would deny that it is a relevant consideration, though!

    Compare R.M. Hare: “It is worth saying right at the beginning that this is not a problem peculiarly for utilitarians… The fact, if it is one, that there are other independent virtues and duties as well [as beneficence] makes no difference to this requirement. Only a theory which allowed no place at all to beneficence… could escape this demand. Anybody, therefore, who is tempted to bring up this objection against utilitarians should ask himself whether he is himself attracted by a theory which leaves out such considerations entirely.”

  • http://www.mccaughan.org.uk/g/ g

    anonymous, no one was claiming that “pain can just be summed linearly”. Those who agreed with Eliezer were claiming that a sort of “archimedean principle” holds for pains — that given any two bad things, *some* number of repetitions of the smaller will outweigh the larger — but that’s a far smaller claim than “pain can just be summed linearly”. Unknown has given a schematic version of an argument for that weaker claim in this thread; the number 3^^^3 is so inconceivably enormous, of course, that one can proceed with far bigger increases in the number of people and far smaller decreases in the amount of badness.

    That argument depends on another assumption, namely that there’s some kind of continuum between dust specks and years of torture. That seems plausible to me, not least because it seems easy to construct what seem like a set of quite closely spaced points between dust specks and torture, but of course it wouldn’t be true if torture were really a kind of “higher-order suffering” as you suggest.

  • Paul Gowder

    Richard: I’m not sure that’s quite right. I take a chief distinction between deontology (let’s not complicate things by bringing in virtue ethics, whatever it was that Bernard Williams thought he was defending, etc.) and utilitarianism to be the presence of side-constraints, per Nozick, Shelly Kagan, etc. If there’s a side constraint against torture, what reason is there to care whether the world where there is torture is better or worse than the world with the specks? Our evaluation of the state of affairs might be one piece of information we can use in determining whether a side constraint applies, sure, but for the notion of a side constraint to be meaningful, we must be able to say in at least some cases that X is wrong regardless of our evaluation of states of affairs, and thus that the evaluation is irrelevant.

  • http://rolfnelson.blogspot.com Rolf Nelson

    Paul, do you have a specific alternative philosophy that you would actually advocate? It’s already been established that it’s possible to construct insane moral philosophies (such as “everything I feel like doing is morally right”) that are non-utilitarian.

  • http://philosophyetc.net Richard

    Paul – sure, there are some cases involving side-constraints that trump evaluations, but not all possible cases are like that. Imagine, for example, that the following three conditions hold: (i) someone is about to trip over themselves in such an awkward way that they will feel torturous pain; (ii) a dusty wind on a highly-populated planet is about to deposit dust-specks into zillions of eyes; and (iii) you have the power to prevent exactly one of these unfortunate events.

    P.S. Besides, quite apart from considerations of action-guidance, it’s just plain interesting to know what states of affairs are better or worse. It also directly guides rational preferences as to how we should prefer the world to be (independently of the question whether we should act so as to bring it about). And one could imagine it relevant to determining whether one is living in the best possible world, and hence whether a perfect God could plausibly have created it, etc. Evaluations are theoretically important for all sorts of reasons.

  • Paul Gowder

    Rolf: in general, I prefer to stay away from declaring an alignment to a specific broad position in normative ethics, for all the major ones are subject to worrying objections and counterexamples. I happen to think the objections and counterexamples are more troubling with respect to utilitarianism than the others, but I’m not completely happy with any position on offer.

    But if I were forced to pick, a good first pass might be something like Kant’s categorical imperative or a similar deontological position that places priority (lexical, even) on not offending the autonomy and dignity of human beings and on universalization (with shades of Habermas’s discourse ethics and the recognition-based ideas expressed by a variety of continental philosophers — my favorite being Simone de Beauvoir’s The Ethics of Ambiguity).

    It’s worthwhile to think about how a Kantian would evaluate the torture vs. dust specks case. Of course, the Kantian would say they’re both wrong — in each case, one is exercising coercion on other people such that they can’t “contain within [themselves] the end of the action,” but if forced to choose, I think one could easily pick the specks over the torture. First note that deontological ethics need not be aggregative. We don’t have to say that violating the autonomy and dignity of 2 people is worse than one, or that we can attach some kind of scalar to the amount of injury we do to someone’s autonomy and dignity such that we can sum those harms across multiple people, etc. Rather, I think that a Kantian would just say that the act of torturing is worse than the act of dust-specking regardless of the total harm, because it is a bigger outrage to the dignity of a moral agent, is a bigger disruption to the victim’s carrying out of his own life, etc.

  • Paul Gowder

    Richard: but I think your example misses the central feature of the original problem, which happens also to be the feature that I think makes this really a debate about the virtues of utilitarianism. And that’s the difference between torturous pain and torture. Torture entails the existence of a torturer in a way that torturous pain doesn’t. And so in your case, I think a deontologist (assuming arguendo that pain is aggregable across people, etc.) could reasonably choose to stop ii. But I don’t think a deontologist could make the same choice if i. were “you are about to torture someone,” or “someone is about to be tortured by Torquemada.” And I don’t think you can offer me a case where the agency behind torture doesn’t make the difference.

    I suppose I’m really ducking your point here, which is that sure, the evaluation of states of affairs is sometimes relevant to people other than utilitarians. But I don’t feel many compunctions about that move, because I think the original problem is one that utilitarians and deontologists have to disagree on, just because they’re utilitarians and deontologists and it’s about torturing people. Deontologists (ok, this deontologist) distinguish between torture and other kinds of pain just because torture violates a side-constraint.

    I don’t feel like that answer is very clear, possibly because we’ve reached a point where my thinking isn’t very clear. Let me try it from a different example. In the context of your first comment — I guess I just don’t know what it would mean for a deontologist to say “torture is better.” I know what “torturous pain is better” means, but “torture is better” sounds to me like a claim not just about states of affairs but also about actions.

    (Good to have you commenting on this post, by the way. I’m rather fond of your blog.)

  • Paul Gowder

    Uh, for “example” in that last paragraph, read “angle.” Obviously my caffeine is wearing off.

  • http://rolfnelson.blogspot.com Rolf Nelson

    And that’s the difference between torturous pain and torture

    Paul, as a side note, if you re-read the comments (for example, this one) I think most of the people who’ve been replying in the past month are advocating a lexical ordering of outcomes based on their intuitions (which, as Michael Vassar pointed out, fails to take into account the fact that our intuition doesn’t understand large numbers and, as James Miller pointed out, is in blatant contradiction to their actual actions). Like Richard, I agree that this “unwillingness to do math” phenomenon is somewhat orthogonal to utilitarian vs. deontological arguments. You deontologists still need to contrast outcomes from time to time, and we utilitarians still sometimes get irrationally stubborn and refuse to synchronize our mathematical results with our axioms.

  • Paul Gowder

    Rolf: I’m starting to think that we’re less far apart than I initially thought. I disagree with nothing you said in the last comment: nothing there is inconsistent with my two main objections to the “obvious” correctness of the torture choice, viz. a) “a deontologist doesn’t even have to play that game,” and b) “it’s not that easy to aggregate utility across people.” I confess, I have some sympathy to the lexical ordering of outcomes too, but I think Michael’s point has convinced me to the contrary.

    So while it still seems true that only a utilitarian (modulo aggregation issues) is forced to make the particular choice presented by Eliezer’s example, your points are well-taken.

  • Unknown

    Three points: First, utilitarianism is irrelevant, as Richard points out. I was myself thinking of the torture as inflicted by unintelligent robots or machines, and not by a personal agency. Even if it is a personal agency, as long as it isn’t me, which of the two states of affairs is preferable can make a difference to my action, even if I think that torture is always wrong. (This will be explained below, in response to the objection that preference for the dust specks is in blatant contradiction to people’s actions.)

    Second, in response to Michael’s claim that we don’t have intuitions about a googol of something: we don’t need them. The intuition is that considering the two states of affairs, “x persons suffer pain z” and “y persons suffer pain z+e”, the second will be preferable, if y>x by a sufficiently large amount, and if e is made small enough. In other words, it is a general intuition that will have consequences for things involving a googol, but it doesn’t need to involve a direct intuition about a googol.

    Third, it is not true that people do not act on a preference for the dust specks over the torture, in terms of states of affairs. They do. They simply don’t act on this in terms of actions. In this way they prefer “to allow the dust specks” rather than “to inflict torture on someone”.

    People do prefer the state of affairs where very great harms come to a small number of people rather than states where very small harms come to a very large number of people. For example: suppose everyone’s taxes are raised by $10. In this way the US government can raise several billion dollars. Surely with this money it can prevent a few more murders, namely by acting in such a way that the murder rate decreases at least slightly. Do you prefer that we allow the murders that we could have prevented, or that we raise taxes on everyone by $10? People prefer to allow the murders. Notice, however, that no one prefers to be a murderer rather than to pay $10, or even to commit a murder rather than raising taxes. People act like deontologists (whether they are philosophically or not). So they won’t choose to perform the harmful action themselves, whatever the consequences. This explains James Miller’s point about the assassinations versus the bombing campaign; the assassinations are seen as murders, but not the collateral deaths in the bombing campaign. But at the same time, it shows that people do have a preference for the few concentrated harms, considered as states of affairs, and they act on this preference (for example by being unwilling to pay more taxes.)

  • Unknown

    Correction: considering the two states of affairs, “x persons suffer pain z” and “y persons suffer pain z+e”, the second will be preferable, if x>y by a sufficiently large amount, and if e is made small enough.

    (In other words, not, y>x.)

  • Jadagul

    Richard: You’re also assuming there’s some independent metric of value. Or at least, you seem to be. What if I say that torture is a better state of affairs to you, but dust specks are a better state of affairs to me? Or more powerfully, that I prefer dust specks if the tortured person is someone I care about, and torture otherwise? I can guarantee you that if the choice was between my best friend being tortured and 3^^^3 people I don’t know getting the dust specks, I’d prefer option 2. But I don’t think this has any particularly interesting results. The interesting questions don’t come in until we start asking about agency and moral strictures.

  • http://hanson.gmu.edu Robin Hanson

    Paul, yes, my argument only argues for simplicity, not which form.

  • Sandy

    To Michael Vassar’s point re: non-intuitive scales: Does not the scale cut both ways? Even granting:
    (i) each individual person’s pain/disutility function P(x) is continuous for all states x between x1 [= dust speck] and x2 [= 50 years of torture], and
    (ii) the cumulative disutility is linearly additive across any number of persons N,
    it is not clear that N*P(x1) > P(x2) for some number N = 3^^^3 or googol or some other large number that exceeds a normal person’s ability to form meaningful comparisons.

    There seems to be an assumption that the ratio R = P(x2)/P(x1) < N. Why? It is not at all obvious to me that this is the case. Certainly R is large; perhaps so large as to appeal to the use of Knuth's arrow notation or chained arrow notation or some other means of describing unconventionally large numbers; perhaps not. Some have appealed to the observation that at states near x1 and x2, you can make small enough changes to the states such that you can form reasonable judgments as to the size of P(x1) versus M*P(x1+delta). That still leaves us with the question of how many deltas fall between x1 and x2. When you're dealing with two unknown values, determining that one is greater than the other on the basis that the former is greater than any number you've previously conceived of seems silly. Perhaps even a manifestation of a cognitive bias that all unknown values fall within a range of previously conceived-of scales.

    Arguments like Unknown's merely illustrate (granting the assumptions above) that there is some number N for which N > R, provided P(x2) is finite and P(x1) is greater than zero, which is trivial given the assumptions.
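
    As a minimal sketch of that dependence (the disutility values below are purely hypothetical placeholders; nothing in the thread fixes them, which is exactly the point):

    ```python
    # Under linear aggregation, "N small harms outweigh one large harm"
    # reduces to N > R, where R = P(x2) / P(x1). Whether that holds turns
    # entirely on a ratio no one has estimated; the values below are
    # arbitrary stand-ins chosen only to exhibit both outcomes.

    def dispersed_outweighs(N: float, R: float) -> bool:
        """True iff N copies of the small harm exceed the single large harm."""
        return N > R

    print(dispersed_outweighs(N=1e100, R=1e30))    # True: a googol of specks wins
    print(dispersed_outweighs(N=1e100, R=1e120))   # False: the ratio itself is vast
    ```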

  • Unknown

    N does not need to be particularly large, because the number of possible brain states a human being can have is not particularly large.

    In any case, if 3^^^3 is too small, we can always choose Busy Beaver (3^^^3) instead, compared with which 3^^^3 is very, very, very close to zero.

  • http://www.frankhirsch.net Frank Hirsch

    Unknown: Quite independently of your point, it seems to me you have a very peculiar notion of “large”.

    regards, frank