We humans have come a long way since we first became human; we’ve innovated and grown our ability to achieve human ends by perhaps a factor of ten million. Not at all shabby, even though it may be small compared to the total factor of growth and innovation that life achieved before humans arrived. But even if humanity’s leap is a great achievement, I fear that we have…
I have become much more sympathetic to Robin's position on this recently. The two main arguments that convinced me:
Some claim that the lives of most poor people / animals / etc are obviously not worth living, with their evidence being that they would prefer to spend an hour unconscious rather than spend that hour as (say) an average wild animal. But this kind of argument seems far too flimsy to support such claims.
For example, assuming I will live a long life, I would be happy to skip out on, say, a randomly selected dull plane journey from some point in my future. But if told that I was about to experience a plane journey and then die afterward, and given the choice to just die immediately instead, I would MUCH rather experience those extra few hours, even spent sitting in an uncomfortable plane seat with my ears popping. And I expect that some ecstatically happy pampered transhuman would say, truthfully, that they would rather spend an hour unconscious than experience an hour of my life - yet I think my life is worth living, even the bad parts. And my assessment of an experience seems to depend very heavily on my prior expectations, and also on what my peers are doing - I'm much more willing to put up with something if I know I'm not alone in having to do so.
And so on. Also, the concept of a happiness set point seems to be quite well established. On examination, this argument seems to me about as strong as folk-economic claims about the 'true value' of a good: if I wouldn't buy something, then surely anyone who does is being ripped off.
Some other people claim that, even if the modal experience of a wild animal or poor person is positive utility, severe pain is so strongly negative that even a small amount of it can easily outweigh a life otherwise composed of long periods of low, but positive, utility experiences. But I think this puts far too little emphasis on experience duration. I would MUCH rather experience a few seconds of severe pain than a day of, say, a dull stomach ache.
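A toy calculation makes the duration point concrete. This is my own illustration, not anything from the original comment: it assumes, contestably, that disutility simply accumulates as intensity times duration, with made-up numbers for both.

```python
# Toy model: total disutility = intensity * duration in seconds.
# Both the linear-aggregation assumption and the numbers are made up.

severe_pain = 100.0 * 10       # intensity 100 for 10 seconds -> 1000
dull_ache = 2.0 * 24 * 3600    # intensity 2 for a full day   -> 172800

print(dull_ache / severe_pain)  # ~172.8: the dull day dominates under this model
```

Under that linear assumption the dull day is roughly 170 times worse, which is why how one weights duration matters so much to these comparisons.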
Really the thing that frightens me most about severe pain is when it's accompanied by long-term effects like disfigurement. But if you're experiencing severe pain in the process of being killed, this doesn't apply - in fact this is probably the best time to experience disfigurement.
Yes this is ideological and in some sense political, so what?
Tastes aren't claims. Treating them as claims - which is what ideologies do - results in falsehood. (See Why do what you "ought"?—A habit theory of explicit morality - http://juridicalcoherence.b... )
The core issue seems to come down to different models of how ethics scales with pain vs pleasure, over many beings or over much time, etc.
Robin and Eliezer do something else. They assert (Robin) or try to establish (Eliezer) a particular correspondence. They aren't scaling any extant ethics. They are aesthetically enamored with the model and adopt the corresponding ethics, not as an ethics (which you can't simply adopt) but as an ideology.
Is it off topic? If this were a pure subjective exercise, yes. But it's a subjective exercise implicitly pretending to objectivity.
I think I understand your perspective but I'm not sure what it implies for this discussion. One can replace "values" with "tastes" and admit there is ideology being adopted, which presumably Robin and Wei Dai wouldn't object to. The core issue seems to come down to different models of how ethics scales with pain vs pleasure, over many beings or over much time, etc. Yes this is ideological and in some sense political, so what? In either case, I guess this is off topic.
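For concreteness, those "different models of how ethics scales" can be sketched as alternative aggregation rules over the same population. This is only an illustration; the utilities, weights, and the negative-leaning rule below are entirely hypothetical.

```python
# Three aggregation rules over per-person utilities (illustrative only).

def total_utility(utils):
    return sum(utils)

def average_utility(utils):
    return sum(utils) / len(utils)

def negative_leaning(utils, pain_weight=10.0):
    # Hypothetical rule: suffering counts pain_weight times as much as pleasure.
    return sum(u if u >= 0 else pain_weight * u for u in utils)

# A million mildly good lives plus a hundred quite bad ones.
population = [0.01] * 1_000_000 + [-5.0] * 100

print(total_utility(population))    # 9500.0  -> clearly net positive
print(average_utility(population))  # ~0.0095 -> barely positive per person
print(negative_leaning(population)) # 5000.0  -> still positive, much closer to zero
```

The point is only that the same world gets ranked differently depending on the rule, not that any of these weights are correct.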
I would most definitely not accept Maslow's Hierarchy of Needs. Anxiety and similar processes (or "safety," as he puts it) don't appear to be higher-order cognitive processes, and so don't even function in remotely the same manner as the others. Self-actualization is too complex a term to be properly tested experimentally. Belonging and esteem are too easily conflated. And he doesn't include anything that could be related to an escape function (maybe safety, but safety appears to be related more to anxiety-type issues: fight or flight, that sort of thing).
This whole "values" thing is, frankly, bogus. Your values are not much different from mine. The difference is in 1) tastes; and 2) your proclivity to treat your tastes as truths. (This is shown here by your treatment of values as if they can be premises in a sound argument.)
You simply don't fully understand the basic truth that science is value-free. This fallacy is embedded in the dominant school of economics, which is built around realizing a particular value.
The function of an appeal to values is political. And even politically, this only makes sense when your values are widely shared.
[Added.] Values are deep and quasi-permanent. Tastes are often ephemeral, such as when they are the result of adopting an ideology. I can imagine worshiping a densely populated universe by deciding (arationally) to adopt the appropriate ideology. The difference is that you treat your ideology as a basis for unconditionally recommending policy.
I often run out of a basis on which to argue with folks who seem to just have different values. Perhaps such a basis can be found, but I don't see it at the moment.
Absent a formal debate, can you address negative-utilitarian concerns anyway? (I'm sympathetic to negative-leaning utilitarianism myself.) Are people whose main worry is astronomical suffering, and who think that a lifeless universe comparatively isn't that bad, wrong or irrational somehow? Or is it just a matter of different values?
And the people who would trade a universe full of life for stability and their "higher needs" (the people you're worried about in this post), are they wrong, or just have different values from you?
The universe does appear dead, but perhaps it's not. Individuals can often climb Maslow's pyramid, but cultures/countries seem stuck at the bottom, obsessed with defending themselves and fighting wars for resources. Just as each individual must take their own steps to self-actualization, it's possible that any civilization advanced enough to travel between stars is also intelligent enough to realize that each civilization must find its own way. We are isolated for a reason. Every butterfly must escape from its own chrysalis in order to become strong enough to survive.
How do you reconcile this post with the last one ('Seduced by Tech')? I may desire to become a self-actualized photographer but with technology I can simply add an Instagram filter to my smartphone photos. Technology propagates because we never escape 'near mode' at some level. Maybe the last post was demand side, and your concern here is restricted to the supply side.
I realize that "Repugnant Conclusion" is a name. I was objecting to the process that gave rise to the original name.
I would assume Robin was being rhetorical in talking about things being "repugnant," simply on account of that name. At least I don't feel real repugnance to the way the world is now, and I suspect that he does not either. I just think it would be better with more people. That is why I said that I "basically" agree.
I think that utilitarianism is false. But a selfish utilitarianism that says that 10,000 rich people are better than a billion poor people is worse and falser than an unselfish one, especially since the only reason for someone to think this is that he hopes to be one of the 10,000.
I certainly do think that moral realism is true, in the sense that I think when I say "murder is bad," that is an objective fact about the world. As for "in which case they deserve our compassion," that is a moral claim itself. Thus in your opinion it is not a factual claim, and there is nothing to dispute about it.
In regard to your essay, you are mistaken to link free will and morality. The reason we don't think of morality as being involved in the lives of dogs is not that they don't have free will, but that they don't have reason. Moral good and moral evil mean "good and bad, judged according to reason." So if something doesn't have reason, it doesn't have moral good and moral evil. But once you have reason, you have good and bad, and that does not depend on free will one way or another; the fact that murder is bad does not depend on whether the person doing it could have avoided it or not. The question of blame is secondary. You might think he is not to blame if he does not have free will; but it was still bad.
"Repugnant Conclusion" (as seen by the capitalization) is merely a name for the Mere Addition Paradox. If you want to shame someone for calling opponents evil, you should shame Robin for actually claiming that the opposite is "repugnant."
Utilitarianism is an ideology, not an ethics. Robin is OK with endorsing ideologies. Some of us think intellectuals should try to free themselves from ideologies rather than glory in them.
Debates about ideologies pretend that ideologies are "opinions" (as you do). This is an intellectually dishonest pretense, in that the disputants know that they aren't making factual claims. [Or else they really believe that moral realism is true, in which case they deserve our compassion. ("The deeper solution to the mystery of moralism—Morality and free will are hazardous to your mental health." - http://juridicalcoherence.b... )]
"Repugnant Conclusion" is like calling the opinions of the people you disagree with "Evil Ideas" or something like that.
I certainly do think that a billion lives barely worth living are far, far better and more important than 10,000 lives at a very high standard of living.
If that weren't the case, 10,000 people of that kind would be justified in wiping out a continent (trolley style) in order to preserve their lives, and that is obviously false.
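Under simple total utilitarianism the comparison reduces to arithmetic. The per-life utilities below are made up purely for illustration and are not anyone's actual estimates:

```python
# Made-up per-life utilities; simple total utilitarianism for illustration only.

billion_poor = 1_000_000_000 * 0.01  # a billion lives barely worth living
ten_k_rich = 10_000 * 100.0          # 10,000 lives at a very high standard

print(billion_poor)  # 10000000.0
print(ten_k_rich)    # 1000000.0 -> an order of magnitude smaller total
```

The ordering obviously flips under average utilitarianism (0.01 per life versus 100), which is exactly the disagreement at issue.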
And you think this will happen tomorrow, and that the carrying capacity of Earth is unlimited? The demographic transition is just the beginning of long periods of slow or even negative growth that we are already adapting to. There is more possibility in virtual worlds, but they may be further in the future and more limited than currently imagined.
But the basic point is that people now do not have more children until they reach a state (roughly $100M in wealth) where childcare can be 100% outsourced, so that children imply no tradeoffs in either a material or a labor/leisure sense.
That is still a population that places zero value on the joy/satisfaction/meaning of having children.
I'm only speculating. I don't know of a source for hard numbers on birth rates for the extremely wealthy (say, $100M+ USD net worth).
But I suspect their birth rates are well above average (adjusted for age; such people tend to be older).
Social attitudes toward childbearing trail economic circumstances - it takes a couple of generations before people adapt to new incentives. So newly middle-class people from impoverished agricultural backgrounds still have lots of kids for a generation or two.
On the other extreme, once people are wealthy enough that the costs of child care and education are negligible vs. other expenses, then the economic disincentive to reproduce disappears, leading to higher birth rates.
Of course that applies today to people who are wealthy in both absolute and relative terms - if everyone were equally wealthy in absolute terms, child care (being mostly labor) would remain expensive relative to income.
In the future we may have robots that can do child care. If so, we may see higher birth rates as a result, after a couple of generations to adapt.
Evolution doesn't stop. Selective pressures accrue, animals (or agents, or corporations, or higher-level aggregates we lack words for) fill space, and the iron law of entropy makes demands. Ems-in-substrate outcompete meat-for-meat-gratification.
On the other hand, self-actualization allows us to care about things like "existential threats to the abstract notion of humanity" or "escaping the gravity well of earth".