No theory X in shining armour

A frequent topic on this blog is the likely trade-off between a higher population and a higher quality of life at some point in the future. Some people – often total utilitarians – are willing to accept a lower quality of life for our descendants if that means there can be more of them. Others – often average utilitarians – will accept a smaller population if it is required to improve quality of life for those who are left.

Both of these positions lead to unintuitive conclusions if taken to the extreme. On the one hand, total utilitarians would have to accept the ‘repugnant conclusion’: that a world with a very large number of individuals whose lives are barely worth living could be much better than one with a small number of people leading joyous lives. On the other hand, average utilitarians confront the ‘mere addition paradox’: adding another joyous person to the world would be undesirable so long as their life was a little less joyous than the average of those who already existed.

Derek Parfit, who pioneered these ethical dilemmas and wrote the classic Reasons and Persons, strove to,

“develop a theory of beneficence – theory X he calls it – which is able to solve the Non-identity problem [1], which does not lead to the Repugnant Conclusion and which thus manages to block the Mere Addition Paradox, without facing other morally unacceptable conclusions. However, Parfit’s own conclusion was that he had not succeeded in developing such a theory.”

Such a ‘theory X’ would certainly be desirable. I am not keen to bite the bullet of either the ‘repugnant conclusion’ or the ‘mere addition paradox’ if neither is required. Unfortunately, if, like me, you were hoping that such a theory might be forthcoming, you can now give up waiting. I was recently surprised to learn that ‘What Should We Do About Future Generations? Impossibility of Parfit’s Theory X’ by Yew-Kwang Ng (1989) demonstrated many years ago that theory X cannot exist.

To complete the proof, Ng has to add a very reasonable principle of his own:

Non-Antiegalitarianism: If alternative B has the same set of individuals as in alternative A, with all individuals in B enjoying the same level of utility as each other, and with a higher total utility than A, then, other things being equal, alternative B must be regarded as better than alternative A.

Given that both average and total utility increase while inequality is reduced or unchanged, this principle can hardly be disputed. If we avoid the Mere Addition Paradox (in Ng’s phrasing, if we accept the Mere Addition Principle) and then apply Non-Antiegalitarianism, the Repugnant Conclusion becomes inevitable:

Consider the following alternatives:

A: 1 billion individuals with an average utility of 1 billion utils.
A+: The same 1 billion individuals with exactly the same utility levels plus 1 billion trillion individuals each with 1 util (i.e., barely worth living).
E: The same individuals as in A+ with a somewhat higher total utility but equally shared by all (i.e., each with, say, 1.01 utils).

Clearly, the Mere Addition Principle implies that A+ is better than or at least no worse than A, and Non-Antiegalitarianism implies that E is better than A+. So E is better than or at least not worse than A. (Cf. Parfit, 1984, pp. 431-32.) Since a life with 1.01 utils (this positive figure can be made as small as we like by a suitable change in numbers in the earlier example) is still barely worth living, the necessity to say that E is better than or at least not worse than A must still be regarded as an instance of the Repugnant Conclusion.
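
To make the magnitudes concrete, here is a quick sketch of the totals at each step (Python; the per-person figures are Ng’s illustrative numbers):

    # Ng's chain A -> A+ -> E, with the numbers from the quoted example.
    n_original = 10**9               # 1 billion people in A
    n_added = 10**9 * 10**12         # 1 billion trillion extra people in A+

    total_A = n_original * 10**9             # everyone in A at 1 billion utils
    total_A_plus = total_A + n_added * 1     # the extras at 1 util each
    total_E = (n_original + n_added) * 1.01  # everyone equal at 1.01 utils

    print(f"A:  {total_A:.4e}")       # 1.0000e+18
    print(f"A+: {total_A_plus:.4e}")  # 1.0010e+21
    print(f"E:  {total_E:.4e}")       # ~1.0100e+21

    # Mere Addition: A+ is at least no worse than A (the original billion are
    # untouched, and the extra lives are worth living). Non-Antiegalitarianism:
    # E is better than A+ (same people, equal utility, higher total). So E is
    # at least no worse than A, despite everyone in E sitting at 1.01 utils.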

We must therefore either reject Non-Antiegalitarianism, or bite the bullet of the Mere Addition Paradox or the Repugnant Conclusion. Non-Antiegalitarianism seems impregnable. Biting the bullet on the Mere Addition Paradox would imply that 1 person with a utility of 1 could be more desirable than 1 million people with an average utility of 0.99, even if all of them were living highly worthwhile lives. That is also simply ridiculous in my view. The Repugnant Conclusion suggests that a large number of people with lives just worth living can be better than a smaller number with very good lives. But the values and quantities involved are hard to grasp. While it is unpleasant to imagine myself living in a world full of people with lives only barely worth living, is that in itself a good reason to reject such a world? Ng argues not:

“…why do most people find [the conclusion] repugnant? This, I believe, could be due either to an inability to understand the implication of large numbers or to misplaced partiality. Consider the following alternative worlds:

A: 1 single utility monster with 100 billion utils. [for a total of 100 billion utils]
B: 1 billion individuals each with 200 utils. [for a total of 200 billion utils]
C: 1 billion billion individuals each with 0.001 utils. [for a total of 10^15 utils]

Intuitively, most people prefer B to C and also prefer B to A. This is so because B looks similar to our present world and we are not prepared to sacrifice a decrease in average utility from 200 to 0.001 even if the increase in population size more than compensates (in terms of total utility) to this reduction. Also, we are not prepared to sacrifice numbers from 1 billion to 1, even if the gain in average utility overbalances this. But this is taking a partial view from our standpoint. From an impartial viewpoint or from the viewpoint of comparing two hypothetical, mutually exclusive alternatives, if B is better than A, then C is much better than B.”
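
The bracketed totals above are just population times average utility; a quick sketch for concreteness:

    # Totals for Ng's three hypothetical worlds.
    worlds = {
        "A": (1, 100 * 10**9),        # 1 utility monster at 100 billion utils
        "B": (10**9, 200),            # 1 billion people at 200 utils
        "C": (10**9 * 10**9, 0.001),  # 1 billion billion people at 0.001 utils
    }

    for name, (population, average) in worlds.items():
        print(f"{name}: total = {population * average:.3e} utils")

    # A: total = 1.000e+11
    # B: total = 2.000e+11  (B beats A on both total and numbers)
    # C: total = 1.000e+15  (C beats B on total by four orders of magnitude)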

A is threatening because I am not a part of it, and C is threatening because, so long as I can only be one person, I will get a lot less utility from my existence. I am perfectly able to see why B is better than A because, the utility monster’s 100 billion utils aside, the comparison involves figures I am comfortable with. The fact that there are more people living good lives, rather than one person living a great life, doesn’t raise any alarms. But if I accept that, why not also accept the move from B to C? I don’t fully comprehend what a billion billion people, or a life of 0.001 utils, is really like, but by extension the move seems desirable. I imagine that if I were already a part of C, moving from B to C would seem just fine.

Aliens visiting Earth might well see our lives as barely worth living, at least relative to theirs. In light of that, should we necessarily prefer to replace all of humanity with a single individual living a better life than anyone has so far? I think not.

Of course I would not want to personally move from B to C because I would be worse off. But that selfish desire is not a reason against acting in a way that improves the total welfare of others I don’t personally know.

If I have to accept something, I accept the repugnant conclusion and aim to maximise total welfare. A lot of little bits of good can indeed add up to a lot of good, even if it’s hard to picture!

[1] The non-identity problem is that most important choices affecting the future don’t just affect the quality of life of people in the future, but also ‘who’ exists, by changing the precise circumstances of people’s conception. If, to avoid having to worry about impacts on who exists, you decided to concern yourself only with how your choices affected the welfare of people who would live in all the future scenarios you were contemplating, then in many cases you would not care about the future at all, because no identical people would feature in all of those scenarios.

Update: Yew-Kwang emails to add, “You understate the case against average utilitarianism … one could go further than this well-known mere addition paradox. In an original population of 100 million with AU = 100, the addition of another 100m with AU = 80 and with the pre-existing people AU increases to 110, this change that makes all existing and new individuals happier is still opposed by average utilitarianism, since the AU decreases from 100 to 95. Thus, average utilitarianism is much more unacceptable, repugnant than you thought, and than the repugnant conclusion.”
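
The arithmetic in Ng’s example, for anyone who wants to check it (populations in millions):

    # Everyone gets happier, yet average utility (AU) falls.
    pre_pop, new_pop = 100, 100      # millions of people
    pre_AU_after, new_AU = 110, 80   # pre-existing people rise from AU 100 to 110

    AU_after = (pre_pop * pre_AU_after + new_pop * new_AU) / (pre_pop + new_pop)
    print(AU_after)  # 95.0 -- down from 100, so average utilitarianism objects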

  • Richard Chappell

    Biting the bullet on the Mere Addition Paradox would imply that 1 person with a utility of 1 could be more desirable than 1 million people with an average utility of 0.99, even if all of them were living highly worthwhile lives

    This doesn’t follow. Rejecting (“biting the bullet” on) mere addition is compatible with strict average utilitarianism, but it doesn’t entail it.

    It’s also not obvious that “if B is better than A, then C is much better than B”.  That assumes that our only basis for preferring B to A is total utilitarianism.  But there are value holist [pdf] views that could coherently hold B to be the best of the three worlds.

    • TheOrphanWilde

      The value holist views explicitly reject the egalitarian principle.
      There also seems to me to be some utility sleight of hand taking place with its discussion of “diversity.”  It’s effectively positing a second utility set, which leads to its own set of repugnant conclusions. (Although it makes the claim that the conclusions are not actually repugnant.)

  • Chris

     
    From an impartial viewpoint or from the viewpoint of comparing two hypothetical, mutually exclusive alternatives, if B is better than A, then C is much better than B.”

    But why in God’s name would you want to take an “impartial” viewpoint on it? Sure, if you’re for some bizarre reason in the situation of having to choose between the two situations conjured up out of nothing, then maybe there are reasons to prefer C. But we won’t be in that situation, we’ll be in situation B deciding if we want to go to situation C. There’s no good reason, despite the attempts of people like you and Robin to argue otherwise, to consider the welfare of beings who don’t exist as equal to those who do. It’s not our failure to “bite the bullet” that causes people to rebel from your Malthusian visions, it’s our unwillingness to accept radically weird premises.

  • anon

    I find it very difficult to not substitute something like “wealth” or “total consumption” for utility when reading something like this. I think that may be the point. 
     
    To me the whole discussion is an abuse of the concept of utility.

    Which is better, B or C? 
    Who would win in a fight, Superman or Dr. Manhattan? 

    Are these meaningfully different questions? Both seem to have about an equal amount to do with reality.  

  • Sister Y

    David Benatar devotes a chapter in Better Never to Have Been to explaining why antinatalism (ideal population size = 0) is compatible with theory X.

  • http://bloodyshovel.wordpress.com/ spandrell

    All of this pointless abstraction just to say that you endorse amnesty for illegal immigrants in the US? 

    • skyhook

      Robert Wiblin is Australian. 

  • Mark M

    When speaking in purely mathematical terms, non-antiegalitarianism makes sense.  By definition, higher Utils is better, and non-antiegalitarianism increases both average and total Utils.  Yay!

    But is Alternative B even possible?  I think a natural consequence of equalizing Utils for CEOs and garbage collectors is a decline in both total and average Utils.  We’re not talking about math any more – we’re talking about social engineering.  CEOs and other elite professionals who are not rewarded for their efforts will not continue in those efforts.  (Ok, in many instances CEO is a bad example.) 

    So, yeah, if you COULD turn alternative A where worthiness is widely distributed into alternative B where worthiness is equal but higher than the average from A, it would be better.  But you can’t.

    Now, it’s possible that the “distribution of worth” in alternative A has a very high standard deviation or is otherwise skewed, and it may be possible to improve the total and average worth by redistributing the worth.  You just can’t go overboard by trying to equal it out.  There is probably a balance point – let’s call it “Theory X.”

    You also can’t add a zillion people whose lives are barely worth living to a billion people whose lives are very worthwhile and expect those billion to remain the same.  Those zillion malcontents will bring everyone down!  You have to watch your diminishing returns.

  • http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

    To apply such reasoning, you’d have to think utilitarianism is somehow “true,” rather than (at best) a practical guide to social policy under certain limited circumstances. But what reason could there be to think utilitarianism (or any theory positing ethical principles) is true? When you consider that utilities are but quantified preferences of organisms, what is inherently better  (whatever that might mean) about the existence of more preferences to satisfy? My hunch is that the appeal of utilitarianism–the illusion that it is true or compelling–comes from the belief that consciousness (or “sentience”) is an extraordinary possession. Being sentient is like having a soul: it is a special, mystical essence of some higher life forms that inherently deserves respect, and the ultimate “good” relates to the states of that experience: happiness and unhappiness. The connotation remains, despite reformulations of utilitarianism based on a better analysis of motivation.

    If I’m right, utilitarians will find the denial of the existence of qualitative consciousness threatening. (See my “The supposedly hard problem of consciousness and the nonexistence of sense data: Is your dog a conscious being?” — http://tinyurl.com/c3zq8ht)

  • komponisto


    Of course I would not want to personally move from B to C because I would be worse off. But that selfish desire is not a reason against acting in a way that improves the total welfare of others I don’t personally know.

    What do you mean? Of course it is! “Desires” — or “preferences” — are exactly what “reasons for/against acting” are made of! “Welfare” is measured in utils, and utils describe the preferences of an agent.

    And in real life, the agent doing the deciding is not going to be some “impartial” ideal philosopher of perfect emptiness. It’s going to be a human (or a human-descended entity). Yes, maybe aliens would have no problem transporting us from B to C, but (that’s because) they’re not us. We humans care about ourselves; this should be neither surprising nor disturbing.  

    • B for Bandana

      But also in real life, it is a good idea to pretend to be an “impartial” ideal philosopher of perfect emptiness, because if you reveal you’re just a self-interested human, and even worse, are unashamed of the fact, people might trust you less.

      So why did you post that comment? Because you selflessly want people to know the Truth? How ideal-philosopher-like of you.

  • http://www.thepolemicalmedic.com/ Thrasymachus

    Along similar population ethics lines, you might be interested in the work of Gustav Arrhenius. He shows that with a very modest set of axioms you have to accept a *really* repugnant conclusion where you replace (for example) 100 people at happiness 10 with 100 people at happiness -10 and N people at happiness marginally greater than zero.

    http://people.su.se/~guarr/

    • GNZ

      It would be a bit repugnant under most sort of ethics if things work such that giving some people -ve utility is the utility maximising strategy…

      And yet this seems quite Christian in a sense… I imagine a Christ-like figure taking the sins of the world upon him at -1 trillion utils and the rest of the world living a happy life. It seems we can have a positive opinion of such a scenario as long as we are not on the bad end of it.

    • jhertzli

      I’m reminded of the old joke about the business that sells at a loss but makes it up on volume.

  • V V

    More evidence that both total and average utilitarianism are insane

    • http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

       The dilemma (coming from a searching inquiry by a very smart utilitarian, Parfit) seems—for anyone but a religiously convinced utilitarian—a reductio.

    • Pablo

      All population theories are “insane”, on that logic, since no theory satisfies all the adequacy conditions that we would regard as intuitively plausible.  See here for a more rigorous proof of this claim.

      • http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

         Of course, all theories of ethics–not just population ethics–are crazy. Utilitarianism isn’t even especially bad in comparison.

      • Pablo

        You don’t seem to have appreciated the significance of this result.  It would be silly to say that all social choice theories are “crazy” because no theory can meet all of a number of intuitive requirements (as shown by Arrow).  It is, for analogous reasons, equally silly to condemn all ethical theories on the grounds that they all fail to satisfy a number of plausible conditions of adequacy.

      • roystgnr

        Why is it inherently silly to say that all social choice theories are crazy?  Because political stability demands a powerful Schelling point, and “The Objectively Ethical Right of Democracy” is at least a huge improvement over “The Divine Right of Kings”?  Granted, but surely it’s still worth occasionally noticing that collectivist axioms rapidly lead to mathematical contradictions?

        Despite our love of democracy, we mostly avoid the craziness of social choice theories by dividing up decisions into individual choices instead.  I’d say that’s an improvement in total utility.  (And since utility functions are only unique up to affine transformations, you can hardly disprove the claim that satisfying my preferences gives immensely more utilons than satisfying anyone else’s.)

      • http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

         Pablo,
        Arrow’s theorem does show social choice formulae are crazy if they’re understood as the way to compute the “will of the people.” Arrow’s theorem renders that concept absurd. Similarly, an analysis of moral realism can show that all moral theories are crazy (http://tinyurl.com/cxjqxo9), as they are taken as a way to compute “the good.”

      • V V

        I’d say they are insane because they contradict many of our moral intuitions.

  • newqueuelure

    I’ve been working with the fluctuation theorem lately and while reading this I realized that adding or subtracting people necessarily changes the future phase space of utility states and therefore population changes are technically non-equilibrium utilitarianism for which we have few tools to calculate utility.

    An example: adding a person always creates a potential new friend or lover for some existing person.

    Another example: adding people will eventually generate new “dissipative structures” (family, clan, tribe, nation, … ? sports team affiliations) that people will derive utility from. We have no way of knowing whether those billions in the future will have subsistence utility levels or be super-happy mindless drones to some cause, just with subsistence levels of resources. We consider them to have subsistence utility because we don’t know how to calculate the effects of new structures that haven’t yet arisen.

    • http://twitter.com/psztorc Paul Sztorc

      As Jorge also says below, I think it is quite likely that we won’t need to worry about this problem. Though I love this blog and RH is obviously a genius, I’ve never bought the premised tradeoff…causality is of course unknown, but in general average utility has increased with population size.

      Moreover, as shown here, the utilities of worlds A, B and C really cannot be compared…possibly we could force equivalent units by fully informing members of each world about all options in the others (new sports teams, etc.)? When A finds out that he could have a relationship with someone, it’s going to take an awful lot of something(?) to persuade him he’s orders of magnitude better off in A than in B.

  • http://www.facebook.com/people/Jorge-Emilio-Emrys-Landivar/37403083 Jorge Emilio Emrys Landivar

    What if you are just wrong?
    What if, after a certain point, adding more humans has positive returns to scale that outweigh the negatives?

  • Matthew Hammer

    I think I would dispute Non-Antiegalitarianism. 

    Let’s imagine a world of Robin’s ems, specifically one in which all the ems are a copy of a single individual living several lives all of which are worth living. Compare that world to another with the same set of ems (same individual, same number of copies), all living the highest utility life of the first world. I find it hard to be impressed by the second world’s nominally higher total utility given that it’s just multiple copies of the same life being run multiple times. Likewise, if all the lives are not worth living, are lives of torture and suffering, it seems less bad to run the same virtual life repeatedly than to put the em through a variety of inventive tortures, even if the repeated life would be the worst. 

    Stated generally, I think there needs to be diminishing returns in the number of similar lives.

    Now, less hypothetically, I think interpersonal utility comparison, much less interpersonal utility summation, is nonsense. Like adding a coulomb to three metres. Moreover, we’re getting deep into the territory where philosophers start assuming all the features of an individual that make them who they are can be brushed aside by the Rawlsian bullfighter’s cape of ignorance. Conclusions out here are all based on a rickety chain of assumptions and inferences. 

    So my only strong connection to the discussion is the question of how much I would value the different scenarios, and I think it’s natural to have diminishing returns in various similarities between the lives lived. But I think that is more relevant than one might think on the face of it. Because this discussion is really part of a collective negotiation between actual people over what sort of possible world to attempt to bring about. So if diminishing returns in types of lives lived is a common feature of the separate utility functions of the collection of people so negotiating, I would find it natural for it to be a feature of the negotiated utility function of the collective. (Though I suppose my intuition of what is natural is no substitute for the mathematical analysis of strategies in such games). 

    • GNZ

      I have some sympathy to the approach – but what if you were to find out that there was someone similar (or very similar) to you – by what degree would that lower your valuation of yourself or them?

      It seems to me that only where the other was pretty much identical to me could I get to the point where I might even philosophically consider them significantly less valuable than those different from me.

  • Rafal Smigrodzki

    The problem with most attempts at formulating ethical theories, including the utilitarian ones, is the lack of a well-defined in-group to which the theory should apply. 

    As a matter of meta-ethics, a coherent ethical theory must meet some general structural requirements aside from its content, such as computability of rules from knowable preferences (a theory incapable of computing rules does not provide guidance, therefore it is not an ethical theory), or having proper methods of handling recursion between epistemology on which it is built and its own set of rules. Definition of an in-group is yet another such meta-ethical requirement that a good theory should have.

    In the absence of an in-group it is always possible to come up with an infinity of considerations/ethical subjects/issues that inevitably break any computable set of ethical rules (I don’t have a formal proof ….. somebody smart please write one). However, by judiciously choosing your in-group you can very easily reject the Repugnant Conclusion and trivially avoid the Mere Addition Paradox. Just try it: Make up a theory applicable only to e.g. yourself, or the set of all humans alive today, or all sentients that are not envious and have some well-selected set of desires, and see how easy it is to start actually making sense. A bit of self-reference in this process is OK, too.

    • john

      Perhaps the set of all sapient beings who lack strong preferences about anything and everything they’ll never be able to directly influence or observe? That is, those who are indifferent to the fate of Platonic abstractions, distant corners of the universe, etc. Idealists are considerably more expensive to satisfy.

  • Charles Zheng

    If you gave people the option to grant all humans eternal youth at the cost of sacrificing the ability to reproduce, what would most people choose?  We don’t value new lives, we value youth–creating new lives just happens to be the only way to create youth.

    • http://bur.sk/ Viliam Búr

      Creating new lives (in our tribe) also increases the size of the tribe, which usually correlates with its power. We value that too.

  • Hedonic Treader

    Once people stop confusing mere existence with utility, the “repugnant conclusion” will stop confusing them.

  • Cambias

    All of these arguments completely beg the question of whether increased population has anything at all to do with quality of life. Historical evidence suggests the opposite.

  • Richardsilliker

     ” Because political stability demands a powerful Schelling point,”

    Guns help too.

  • MPS17

    Funny; I come to a different conclusion.  My gut prefers B to A, but I think that is because my gut is biased by a biological instinct that wants to stay alive.  Much as I think some people live unpleasant lives (negative utility) but tend not to want to die; I view this as a biological instinct that fears death and does not allow them to rationally confront the superiority of not existing over having an unpleasant existence.  So I think I should really prefer A to B: no one existing but the utility monster isn’t bad for those people who don’t exist, as they don’t exist – and this includes me! – and so I think I should continue to reject the Repugnant Conclusion and continue to accept the Mere Addition Paradox.

  • http://kajsotala.fi/ Kaj Sotala

    I stopped considering the Repugnant Conclusion a problem due to comments such as http://lesswrong.com/lw/dso/the_mere_cable_channel_addition_paradox/73kv, http://lesswrong.com/lw/dso/the_mere_cable_channel_addition_paradox/73x2 and http://lesswrong.com/lw/dso/the_mere_cable_channel_addition_paradox/747c. I’ll just quote one of them (Michael Sullivan’s):

    “John Maxwell’s comment gets to the heart of the issue, the term “just
    barely worth living”. Philosophy always struggles where math meets
    natural language, and this is a classic example.

    “The phrase “just barely worth living” conjures up an image of a life
    that is barely better than the kind of neverending torture/loneliness
    scenario where we might consider encouraging suicide.

    “But the taboos against suicide are strong. Even putting aside taboos,
    there are large amounts of collateral damage from suicides. The most
    obvious is that anyone who has emotional or family connections to a
    suicide will suffer. Even people who are very isolated, will have some
    connection, and suicide could trigger grief or depression in any people
    who encounter them or their story. There are also some very scary
    studies about suicide and accident rates going up in the aftermath of
    publicized suicides or accidents, due to social lemming like programming
    in humans.

    “So it is quite rational for most people to not consider suicide until
    their personal utility is highly negative if they care at all about the
    people or world around them. For most of us, a life just above the
    suicide threshold would be a negative utility life and a fairly large
    negative utility.

    “A life with utility positive epsilon is not a life of sadness or pain, but a life that we would just barely choose
    to live, as a disembodied soul given a choice of life X or
    non-existence. Such a life, IMO will be comfortably clear of the suicide
    threshold, and would, in my opinion, represent an improvement in the
    world. Why wouldn’t it? It is by definition, a life that someone would
    choose to have rather than not have! How could that not improve the
    world?

    “Given this interpretation of “just barely worth living”, I accept the
    so-called Repugnant conclusion, and go happily on my way calculating
    utility functions.”

    • http://kajsotala.fi/ Kaj Sotala

       Or to put it more briefly, to quote Eliezer Yudkowsky’s comment:

      “I think that the term “barely worth living” is a terrible source of
      equivocation that underlies a lot of the apparent paradoxicalness.
      “Barely worth living” can mean that, if you’re already alive and don’t
      want to die, your life is almost but not quite horrible enough that you
      would rather commit suicide than endure. But if you’re told that
      somebody like this exists, it is sad news that you want to hear
      as little as possible. You may not want to kill them, but you also
      wouldn’t have that child if you were told that was what your child’s
      life would be like. What Parfit postulates should be called, to avoid
      equivocation, “A life barely worth celebrating” – it’s good news and you
      say “Yay!” but very softly.”

      • Stephen Diamond

        Notice, also, that if you conceive of a life worth living as one substantially exceeding, rather than barely exceeding, the subsistence level, RH’s em dystopia then involves a trillion lives not worth living.

    • V V

       

      So it is quite rational for most people to not consider suicide until
      their personal utility is highly negative if they care at all about the
      people or world around them. For most of us, a life just above the
      suicide threshold would be a negative utility life and a fairly large
      negative utility.

      Wrong.

      Without loss of generality, let zero be the agent’s subjective utility of world states where the agent is not alive.

      An agent with negative expected discounted utility will want to die, while an agent with positive expected discounted utility will want to live.
      All social taboos and considerations about the welfare of others, are already included in the utility.

      It is by definition, a life that someone would
      choose to have rather than not have! How could that not improve the world?

      Question begging.

      Subsistence farmers living near the edge of starvation rarely kill themselves. They are probably also not particularly unhappy for most of their (short) lives.

      Is a Malthusian world populated by many billions of subsistence farmers (or trillions of hansonborgs) better than a world of < 1 billion people living in relative wealth?

      • Stephen Diamond

        Suicide doesn’t provide a zero point for utility because the very process of ending one’s life or even deciding to end it incurs massive disutilities, that being the real reason many people don’t commit suicide.

      • V V

        Are you talking about preference utility?

      • Stephen Diamond

        You mean as opposed to behavioral utilities–Kahneman’s distinction? I suppose I’m talking about behavioral utilities, since they’re seemingly the relevant sort when you try to use suicide as a zero point.

        This line of thinking about suicide not being an absolute zero suggests some persons would be “better off dead”–but only if one concedes that it makes any sense to compare the utilities of being dead to being alive. But this abstract choice is never actually presented in real life, so there’s no reason to presume the concept of utility applies.

      • V V

        but only if one concedes that it makes any sense to compare the
        utilities of being dead to being alive. But this abstract choice is
        never actually presented in real life

        Actually it’s presented pretty much continuously. Rationally choosing what food you eat, how much exercise you take, whether to cross the road, etc., all require comparing the utility of being alive with the utility of being dead.

        Of course humans are not ideal rational agents, and don’t have a well defined utility function, which makes the point moot.

      • Stephen Diamond

        Rationally choosing what food you eat, how much exercise you take, whether to cross the road, etc., all require comparing the utilty of being alive with the utility of being dead.

        If, as I’m suggesting, the utilities of being alive and of being dead are incommensurable (for want of the zero point’s being defined), then it’s not rational to compute the food you eat, etc. in the manner you (and apparently the utilitarians) believe we do. What we must calculate–to avoid adding apples and oranges–is the relative utility-disutility of the decision to die, as determined by affects such as guilt and (most of all) fear. I may think I don’t want to die, but I really don’t want to choose death. It’s not necessary to conceive of the “utility” of death to answer this question. Utility, after all, is an abstraction from our choosing between different states of being.

    • http://www.mccaughan.org.uk/g/ gjm

      See also, on the same theme and written years earlier, http://www.mccaughan.org.uk/g/essays/cui-bono.html :-). (I am not suggesting that the authors of the comments you mention had read that, though I have pointed at it in LW comments once or twice.)

    • Stephen Diamond

      EY is prone to think that clarifying a term resolves serious questions. Granting your presuppositions for argument’s sake, the problem remains: a sufficiently large population will always outweigh a much smaller one, regardless of there being a much higher level of welfare in the smaller one. When EY’s only reason for believing utilitarianism is true is that he finds it corresponds to our “moral intuition,” the shock to the intuition should be decisive here–except that utilitarianism is based on a moralistic faith (http://tinyurl.com/cxjqxo9).

  • http://www.facebook.com/yudkowsky Eliezer Yudkowsky

    I have advocated that “lives barely worth living” always be replaced with “lives barely worth celebrating” in every discussion of the ‘Repugnant’ Conclusion, to avoid equivocating between “lives almost but not quite horrible enough to imply that a pre-existing person should commit suicide despite their intrinsic desire to live” versus “lives which we celebrate as good news upon learning about them, and hope to hear more such news in the future, but only to a very slight degree”.

    In a Big World, it’s impossible to create anyone; all you can decide is where to allocate measure among experiences.  My utilons for novelty are saturated by the size of reality, and that makes me an average utilitarian.  As an average utilitarian, I do indeed accept that “mere addition”, i.e., allocation of measure to experiences below-average for the global universe, is bad.  If it were, unimaginably, to be demonstrated to me that Earth and its descendants were the only sentient beings in all of Tegmark levels I through IV, then I would embrace the actual creation of new experiences, and accept the Repugnant Conclusion without a qualm.

    • Hedonic Treader

      “As an average utilitarian, I do indeed accept that “mere addition”, i.e., allocation of measure to experiences below-average for the global universe, is bad.”

      Why? The additional perspectives are still dislocated from the rest of the big world. If they feel good, how could their mere addition be bad?

      • Grognor

         Total measure can’t be increased or decreased, only rationed (and maybe not even that, but here’s hoping futility theories are wrong)

    • V V

      I have advocated that “lives barely worth living” always be replaced with “lives barely worth celebrating” [...] “lives which we celebrate as good news upon learning about them, and
      hope to hear more such news in the future, but only to a very slight
      degree”.

      That appears to be a circular definition in the context of a moral theory: The lives that it is good to create are the ones that it is good to create.

      In a Big World, it’s impossible to create anyone; all you can decide is where to allocate measure among experiences.

      What do you mean by Big World? If you mean a Malthusian world, then yes, you can’t add more people, but there is no need to push the world to Malthusian limits. Or do you mean some kind of multiverse, given that you mention Tegmark?

      My utilons for novelty are saturated by the size of reality, and that
      makes me an average utilitarian.  As an average utilitarian, I do indeed
      accept that “mere addition”, i.e., allocation of measure to experiences
      below-average for the global universe, is bad.  If it were,
      unimaginably, to be demonstrated to me that Earth and its descendants
      were the only sentient beings in all of Tegmark levels I through IV,
      then I would embrace the actual creation of new experiences, and accept
      the Repugnant Conclusion without a qualm.

      I’m having difficulties parsing that, but IIUC, you are reifying mathematical objects and assigning moral weight to them, so that you can claim that since all the possible universes with all the possible people exist, you don’t have to care about creating people in your universe, since any person you might create already exists in some other universe.

      Setting aside the fact that this reification is epistemologically questionable to say the least, note that it leads to a morally void position: you don’t have to care about killing either, because the person you kill still exists in other universes. In fact, whatever action you do, there are other universes where you did something else.

      Anyway, given that you consider yourself an average utilitarian, would you support killing off people whose utility was determined to be lower than the maximum? (assume that this doesn’t lower the utility of the remaining people)

    • Pablo

      I’m surprised to read that you are an average utilitarian, since this theory has clearly absurd implications. As I write in this Felicifia post:

      For instance, consider a world A in which people experience nothing but agonizing pain. Consider next a different world B which contains all the people in A, plus arbitrarily many people all experiencing pain only slightly less intense. Since the average pain in B is less than the average pain in A, average utilitarianism implies that B is better than A. This is clearly absurd, since B differs from A only in containing a surplus of agony.
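
      The arithmetic, with made-up numbers (pain as negative utility):

          # World A: n people in agony. World B: the same people plus m more
          # in only slightly less agony.
          n, m = 1000, 10**6
          agony, slightly_less = -100, -99

          avg_A = agony
          avg_B = (n * agony + m * slightly_less) / (n + m)
          print(avg_A, round(avg_B, 3))  # -100 vs -99.001: B has the higher
                                         # average despite strictly more suffering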

      • V V

        Total utilitarianism has also absurd implications.
        I don’t think there is any way to salvage act utilitarianism.

        Maybe you could make the case for rule utilitarianism, but you would still have to solve the hard problem of interpersonal utility comparison and find some reasonable way to constrain the specificity of the rules (if you allow for arbitrarily specific rules then you reduce to act utilitarianism).

  • http://twitter.com/Johnicholas Johnicholas

    I reject the existence of individuals in general. The appearance of individuals is merely an artifact of our history. Many species that were well adapted to an aquatic environment and subsequently moved out of it might well use ‘integument’ and ‘immune system’ strategies that generally require and ensure a same-reproductive-fate for all genes within the integument or immune-system domain – so individualism is likely a reasonable and reasonably common form of life.

    However, life could certainly exist without individuals.

    Furthermore, I reject summation of utilities of different individuals. Utility functions are valuable in making decisions for yourself, and even in building semi-autonomous entities. But summing decision-theoretic utilities of different individuals is a violation of units.

    Behaving well, social welfare, is just not that simple; path-dependence is frequent. For example, in deciding whether to enfranchise someone, to decide that they are a citizen or a person, you might reasonably use solely the preferences of the people who are already enfranchised. But in deciding whether to disenfranchise someone, that person’s preferences do matter, as they already have “the vote”.

  • TruePath

    I also suspect another reason why we find the repugnant conclusion repugnant is a confusion about what it means for a life to be ‘barely worth living.’

    I suspect people imagine this as a life which is just barely preferable to suicide. However, pragmatically speaking our drive to survive, our need to condemn the murder of unhappy people, our need to account for an uncertain future even when emotionally we are convinced things won’t get better and our selfish desire for our loved ones not to commit suicide all conspire to drive our sense of what constitutes a life barely worth living down to a condition of mild (but not horrific) suffering.

  • http://twitter.com/ronmurp Ron Murphy

    I’m on unfamiliar ground here, so perhaps I’m off target. No doubt I will be informed or ignored appropriately.

    Since this post relates only population to utils, I’ll focus on any plan we might have to determine future population, and I’ll ignore some of the other factors that might alter future utils, even though they may be indirectly related – e.g. undesirable climate change caused by near-future population growth that harms more distant future populations.

    Take two possible future worlds, X-world and Y-world, with populations X > Y, a difference D = X – Y, and utility Uy > Ux (based on the inverse relation between population and utils that seems to be assumed here).

    If Y is the outcome there will be D of X who never got to exist. As such the state of D, their lost lesser utils, could be ignored. Why worry about people who will never exist, from our perspective, or do not exist, from a Y-world perspective?

    So, if planning future populations, planning for X instead of Y is planning a state where utility is less than it could be. Could members of population X look back and think badly of us for not maximising ‘their’ utility (really the utils of Y that would have been born had Y-world been the case)? We might think that, provided individuals of X could not determine whether they would have been alive, as members of Y, or never conceived, as members of D, they have no way of knowing whether they would have benefited from our plan for Y-world or not.

    But now, thinking of yourself as some member of X at time Tx, if you learned from planning and population data that you would have been a member of D and so would not now (at Tx) exist, why should it bother you at all? You should note that had you not been born you would not be alive to regret your lack of birth. It would be irrational for a member of D in X-world to lament his possible non-birth and not to lament more the reduced utils of being in X-world.

    In other words, members of X-world could lament being in X-world. They could regret not being in Y-world with Uy, or not being born to regret anything. This knowledge would subtract further from their Ux. On the other hand, members of Y-world considering their possible membership of X-world would be happier for their greater Uy, and so add further to their Uy.

    The above might imply that the best future world consists of reducing X in order to increase Ux, possibly reducing X to 1 for some maximum Ux. Or maybe it implies the best state is X = 0: zero suffering but infinite, if unused, utils – i.e. extinction of intelligent self-aware species is the best possible outcome for them, if utils is inversely related to population.

    But I don’t see any reason to suspect a continuous inverse relationship between X and Ux across all possible X. There may be some point where humans fail to flourish under small populations (e.g. lack of the human resources required to maintain a world that can maintain Ux). There may well be some optimum Y and Uy: X > Y > Z and Ux < Uy > Uz.

    If this latter distribution of utils across populations is the case, and given that there is no reason to worry about the non-existence of people who do not exist, it would seem obvious to aim for the lowest population Y that does not decrease Uy further.

    That leaves the problem of figuring out what Y and Uy are.

  • dt

    Utilitarians should try to maximise:

    Sum_{i = 1 to N} (U_i – U_min)

    where N = number of people, U_i = utility of ith individual and U_min is the minimum utility to make existence worthwhile. This is neither max average nor max total and avoids the problems of both.

    Similarly, in finance, corporations should maximise (profit – cost of capital).
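
    A minimal sketch of this objective in Python (the utility figures are illustrative; the rule resembles what population ethicists call a ‘critical level’ view):

        # Maximize the sum of each person's surplus over u_min, the minimum
        # utility that makes existence worthwhile.
        def dt_value(utilities, u_min):
            return sum(u - u_min for u in utilities)

        print(dt_value([5, 5, 5], u_min=1))       # 12
        print(dt_value([5, 5, 5, 0.1], u_min=1))  # 11.1 -- adding a life below
                                                  # u_min counts against the total

    With u_min > 0 this blocks the Repugnant Conclusion, since lives between 0 and u_min reduce the objective rather than raising it.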

  • JVA

    Why focus on “worth living”? Seeing that the majority of future people will be ems and will not be able to physically die, is there really a lower bound to their utility?