"What's the worst that can happen?" goes the optimistic saying.  It's probably a bad question to ask anyone with a creative imagination.  Let's consider the problem on an individual level: it's not really the worst that can happen, but would nonetheless be fairly bad, if you were horribly tortured for a number of years.  This is one of the worse things that can realistically happen to one person in today's world.

What's the least bad, bad thing that can happen?  Well, suppose a dust speck floated into your eye and irritated it just a little, for a fraction of a second, barely enough to make you notice before you blink and wipe away the dust speck.

For our next ingredient, we need a large number.  Let's use 3^^^3, written in Knuth's up-arrow notation:

  • 3^3 = 27.
  • 3^^3 = (3^(3^3)) = 3^27 = 7625597484987.
  • 3^^^3 = (3^^(3^^3)) = 3^^7625597484987 = (3^(3^(3^(... 7625597484987 times ...)))).

3^^^3 is an exponential tower of 3s which is 7,625,597,484,987 layers tall.  You start with 1; raise 3 to the power of 1 to get 3; raise 3 to the power of 3 to get 27; raise 3 to the power of 27 to get 7625597484987; raise 3 to the power of 7625597484987 to get a number much larger than the number of atoms in the universe, but which could still be written down in base 10, on 100 square kilometers of paper; then raise 3 to that power; and continue until you've exponentiated 7625597484987 times.  That's 3^^^3.  It's the smallest simple inconceivably huge number I know.
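For readers who want to poke at the notation, here is a minimal illustrative sketch of Knuth's up-arrow recursion (my own code, not from the post; only the tiniest inputs are computable):

```python
import math

def knuth_arrow(a, n, b):
    """a ^^...^ b with n up-arrows (Knuth's notation).

    One arrow is ordinary exponentiation; for n > 1, the operator folds
    the (n-1)-arrow operator over itself b times, right-associatively.
    Feasible only for tiny inputs: knuth_arrow(3, 3, 3) would never finish.
    """
    if n == 1:
        return a ** b
    result = 1
    for _ in range(b):
        result = knuth_arrow(a, n - 1, result)
    return result

print(knuth_arrow(3, 1, 3))  # 27
print(knuth_arrow(3, 2, 3))  # 7625597484987

# The next level of the tower, 3^(3^^3), has about 3.6 trillion digits:
digits = math.floor(knuth_arrow(3, 2, 3) * math.log10(3)) + 1
```

The digit count is what makes the "100 square kilometers of paper" claim plausible: a few trillion digits at tens of thousands of digits per square meter.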

Now here's the moral dilemma.  If neither event is going to happen to you personally, but you still had to choose one or the other:

Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?

I think the answer is obvious.  How about you?


Does this analysis focus on pure, monotone utility, or does it include the huge ripple effect putting dust specks into so many people's eyes would have? Are these people with normal lives, or created specifically for this one experience?

1lockeandkeynes14y
I think you can be allowed to imagine that any ripple effect caused by someone getting a barely-noticeable dust speck in their eyes (perhaps it makes someone mad enough to beat his dog) would be about the same as that of the torture (perhaps the torturers go home and beat their dogs because they're so desensitized to torturing).
3VAuroch9y
The ripple effect is real, but as in Pascal's Wager, for every possible situation where the timing is critical and something bad will happen if you are distracted for a moment, there's a counterbalancing situation where the timing is critical and something bad will happen unless you are distracted for a moment, so those probably balance out into noise.
1DragonGod7y
I doubt this.
1VAuroch7y
Why?

The answer that's obvious to me is that my mental moral machinery -- both the bit that says "specks of dust in the eye can't outweigh torture, no matter how many there are" and the bit that says "however small the badness of a thing, enough repetition of it can make it arbitrarily awful" or "maximize expected sum of utilities" -- wasn't designed for questions with numbers like 3^^^3 in. In view of which, I profoundly mistrust any answer I might happen to find "obvious" to the question itself.

Isn't this just appeal to humility? If not, what makes this different?

5MathMage6y
It is not humility to note that extrapolating models unimaginably far beyond their normal operating ranges is a fraught business. Just because we can apply a certain utility approximation to our monkeysphere, or even a few orders of magnitude above our monkeysphere, doesn't mean the limiting behavior matches our approximation.
0adamisom11y
In other words, your meta-cogitation is: 1 - do I trust my very certain intuition? or 2 - do I trust the heuristic from formal/mathematical thinking (which I see as useful partially and specifically to compensate for inaccuracies in our intuition)?

Since there was a post on this blog a few days ago about how what seems obvious to the speaker might not be obvious to the listener, I thought I would point out that it was NOT AT ALL obvious to me which should be preferred: torturing one man for 50 years, or a speck of dust in the eyes of 3^^^3 people. Can you please clarify/update what the point of the post was?

The dust speck is described as "barely enough to make you notice", so however many people it would happen to, it seems better than even something far milder than 50 years of horrible torture. There are so many irritating things that a human barely notices in his/her life; what's an extra dust speck?

I think I'd trade the dust specks for even a kick in the groin.

But hey, maybe I'm missing something here...

-1Insert_Idionym_Here12y
If 3^^^3 people get dust in their eye, an extraordinary number of people will die. I'm not thinking even 1% of those affected will die, but perhaps 0.000000000000001% might, if that. But when dealing with numbers this huge, I think the death toll would measure greater than 7 billion. Knowing this, I would take the torture.

If 3^^^3 people get dust in their eye, an extraordinary number of people will die.

The premise assumes it's "barely enough to make you notice", which was supposed to rule out any other unpleasant side-effects.

-1Insert_Idionym_Here12y
No, I'm pretty sure it makes you notice. It's "enough". "barely enough", but still "enough". However, that doesn't seem to be what's really important. If I consider you to be correct in your interpretation of the dilemma, in that there are no other side effects, then yes, the 3^^^3 people getting dust in their eyes is a much better choice.
2dlthomas12y
The thought experiment is, 3^^^3 bad events, each just so bad that you notice their badness. Considering consequences of the particular bad thing means that in fact there are other things as well that are depending on your choice, and that's a different thought experiment.
-4Insert_Idionym_Here12y
That is in no way what was said. Also, the idea of an event that somehow manages to have no effect aside from being bad is... insanely contrived. More contrived than the dilemma itself. However, let's say that instead of 3^^^3 people getting dust in their eye, 3^^^3 people experience a single nanosecond of despair, which is immediately erased from their memory to prevent any psychological damage. If I had a choice between that and torturing a person for 50 years, then I would probably choose the former.
2dlthomas12y
The notion of 3^^^3 events of any sort is far more contrived than the elimination of knock-on effects of an event. There isn't enough matter in the universe to make that many dust specks, let alone the eyes to be hit and nervous systems to experience it. Of course it's contrived. It's a thought experiment. I don't assert that the original formulation makes it entirely clear; my point is to keep the focus on the actual relevant bit of the experiment - if you wander, you're answering a less interesting question.
1Insert_Idionym_Here12y
I don't agree. The existence of 3^^^3 people, or 3^^^3 dust specks, is impossible because there isn't enough matter, as you said. The existence of an event that has only effects tailored to fit a particular person's idea of 'bad' does not fit my model of how causality works. That seems like a worse infraction, to me. However, all of that is irrelevant, because I answered the more "interesting question" in the comment you quoted. To be blunt, why are we still talking about this?
0dlthomas12y
I'm not sure I agree, but "which impossible thing is more impossible" does seem an odd thing to be arguing about, so I'll not go into the reasons unless someone asks for them. I meant a more generalized you, in my last sentence. You in particular did indeed answer the more interesting question.
1dlthomas12y
Can you explain a bit about your moral or decision theory that would lead you to conclude that?
2Insert_Idionym_Here12y
Yes. I believe that because any suffering caused by the 3^^^3 dust specks is spread across 3^^^3 people, it is of lesser evil than torturing a man for 50 years. Assuming there to be no side effects to the dust specks.
-1Nornagest12y
That's not general enough to mean very much: it fits a number of deontological moral theories and a few utilitarian ones (what the right answer within virtue ethics is is far too dependent on assumptions to mean much), and seems to fit a number of others if you don't look too closely. Its validity depends greatly on which you've picked. As best I can tell the most common utilitarian objection to TvDS is to deny that Specks are individually of moral significance, which seems to me to miss the point rather badly. Another is to treat various kinds of disutility as incommensurate with each other, which is at least consistent with the spirit of the argument but leads to some rather weird consequences around the edge cases.
-1Insert_Idionym_Here12y
No-one asked for a general explanation. The best term I have found, the one that seems to describe the way I evaluate situations the most accurately, is consequentialism. However, that may still be inaccurate. I don't have a fully reliable way to determine what consequentialism entails; all I have is Wikipedia, at the moment. I tend to just use cost-benefit analysis. I also have a mental, and quite arbitrary, scale of what things I do and don't value, and to what degree, to avoid situations where I am presented with multiple, equally beneficial choices. I also have a few heuristics. One of them essentially says that given a choice between a loss that is spread out amongst many, and an equal loss divided amongst the few, the former is the more moral choice. Does that help?
0Nornagest12y
It helps me understand your reasoning, yes. But if you aren't arguing within a fairly consistent utilitarian framework, there's not much point in trying to convince others that the intuitive option is correct in a dilemma designed to illustrate counterintuitive consequences of utilitarianism. So far it sounds like you're telling us that Specks is intuitively more reasonable than Torture, because the losses are so small and so widely distributed. Well, yes, it is. That's the point.
0Insert_Idionym_Here12y
At what point is utilitarianism not completely arbitrary?
0Nornagest12y
I'm not a moral realist. At some point it is completely arbitrary. The meta-ethics here are way outside the scope of this discussion; suffice it to say that I find it attractive as a first approximation of ethical behavior anyway, because it's a simple way of satisfying some basic axioms without going completely off the rails in situations that don't require Knuth up-arrow notation to describe. But that's all a sideline: if the choice of moral theory is arbitrary, then arguing about the consequences of one you don't actually hold makes less sense than it otherwise would, not more.
0Insert_Idionym_Here12y
I believe I suggested earlier that I don't know what moral theory I hold, because I am not sure of the terminology. So I may, in fact, be a utilitarian, and not know it, because I have not the vocabulary to say so. I asked "At what point is utilitarianism not completely arbitrary?" because I wanted to know more about utilitarianism. That's all.
0Nornagest12y
Ah. Well, informally, if you're interested in pissing the fewest people off, which as best I can tell is the main point where moral abstractions intersect with physical reality, then it makes sense to evaluate the moral value of actions you're considering according to the degree to which they piss people off. That loosely corresponds to preference utilitarianism: specifically negative preference utilitarianism, but extending it to the general version isn't too tricky. I'm not a perfect preference utilitarian either (people are rather bad at knowing what they want; I think there are situations where what they actually want trumps their stated preference; but correspondence with stated preference is itself a preference and I'm not sure exactly where the inflection points lie), but that ought to suffice as an outline of motivations.
0Insert_Idionym_Here12y
Thank you.
0dlthomas12y
That's not quite what I meant by "explain" - I had understood that to be your position, and was trying to get insight into your reasoning. Drawing an analogy to mathematics, would you say that this is an axiom, or a theorem? If an axiom, it clearly must be produced by a schema of some sort (as you clearly don't have 3^^^3 incompressible rules in your head). Can you explore somewhat the nature of that schema? If a theorem, what sort of axioms, and how arranged, produce it?
0TimS12y
When I participated in this debate, this post convinced me that a utilitarian must believe that dust specks cause more overall suffering (or whatever badness measure you prefer). Since I already wasn't a utilitarian, this didn't bother me.
2dlthomas12y
As a utilitarian (in broad strokes), I agree, and this doesn't bother me because this example is so far out of the range of what is possible that I don't object to saying, "yes, somewhere out there torture might be a better choice." I don't have to worry about that changing what the answer is around these parts.

Anon, I deliberately didn't say what I thought, because I guessed that other people would think a different answer was "obvious". I didn't want to prejudice the responses.

0[anonymous]9y
So what do you think?
6dxu9y
He gives his answer here.
-1[anonymous]9y
Thank you!
0coldlyrationalogic9y
Exactly. If Eliezer had gone out and said what he thought, nothing good would have come of it. The point is to make you think.

Even when applying the cold, cruel calculus of moral utilitarianism, I think that most people acknowledge that egalitarianism in a society has value in itself, and assign it positive utility. Would you rather be born into a country where 9/10 people are destitute (<$1,000/yr) and the last is very wealthy ($100,000/yr)? Or be born into a country where almost all people subsist on a modest amount ($6,000-8,000/yr)?

Any system that allocates benefits (say, wealth) more fairly might be preferable to one that allocates more wealth in a more unequal fashion. And, the same goes for negative benefits. The dust specks may result in more total misery, but there is utility in distributing that misery equally.
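A small sketch of why a concave utility function favors the equal distribution even though the unequal society has a higher mean income. The dollar figures come from the comment above; logarithmic utility is my own assumption, not anything the commenter specified:

```python
import math

unequal = [1_000] * 9 + [100_000]  # 9 destitute people, 1 wealthy
equal = [7_000] * 10               # everyone modest

mean_unequal = sum(unequal) / 10   # 10,900: more total wealth
mean_equal = sum(equal) / 10       # 7,000

# Average log-utility, a standard model of diminishing marginal utility:
u_unequal = sum(math.log(x) for x in unequal) / 10
u_equal = sum(math.log(x) for x in equal) / 10

print(mean_unequal > mean_equal)  # True: the unequal society is richer...
print(u_equal > u_unequal)        # True: ...but has lower average utility
```

This is the diminishing-marginal-utility point raised in the replies below; it delivers egalitarian-looking verdicts without making equality a terminal value.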

0DanielLC14y
I don't believe egalitarianism has value in itself. Tell me, would you rather get all your wealth continuously throughout the year, or get a disproportionate amount on Christmas? If wealth is evenly distributed, it will lead to more total happiness, but I don't see any advantage in happiness being evenly distributed. I don't see how your comment relates to this post.
1byrnema13y
Perhaps it could be framed in terms of the utility of psychological comfort. Suppose that one person is tortured to avoid 3^^^3 people getting dust specks. Won't almost every one of those 3^^^3 people empathize with the tortured person enough to feel a pang of discomfort more uncomfortable than a dust speck?
8jimrandomh13y
Only if they find out that the tortured person exists, which would be an event that's not in the problem statement.
2Mestroyer12y
Well, there's valuing money at more utility per dollar when you have less money and less utility per dollar when you have more money, which makes perfect sense. But that's not the same as egalitarianism as part of utility.
-2JasonCoston11y
Third-to-last sentence sets up a false dichotomy between "more fairly" and "more unequal."

The dust specks seem like the "obvious" answer to me, but how large the tiny harm must be to cross the line where the unthinkably huge number of them outweighs a single tremendous one isn't something I could easily say, when clearly I don't think simply calculating the total amount of harm caused is the right measure.

It seems obvious to me to choose the dust specks, because that would mean the human species would have to exist for an awfully long time for the total number of people to reach that number, and that minimal amount of annoyance would be something they were used to anyway.

I too see the dust specks as obvious, but for the simpler reason that I reject utilitarian sorts of comparisons like that. Torture is wicked, period. If one must go further, it seems like the suffering from torture is qualitatively worse than the suffering from any number of dust specks.

-2[anonymous]10y
I think you have misunderstood the point of the thought experiment. Eliezer could have imagined that the intense and prolonged suffering experienced by the victim was not intentionally caused, but was instead the result of natural causes. The "torture is wicked" reply cannot be used to resist the decision to bring about this scenario. (There may, of course, be other reasons for objecting to that decision.)

Anon prime: dollars are not utility. Economic egalitarianism is instrumentally desirable. We don't normally favor all types of equality, as Robin frequently points out.

Kyle: cute

Eliezer: My impulse is to choose the torture, even when I imagine very bad kinds of torture and very small annoyances (I think that one can go smaller than a dust mote, possibly something like a letter on the spine of a book that your eye sweeps over being set in a shade less well-selected a font). Then, however, I think of how much longer the torture could last and still not outweigh the trivial annoyances if I am to take the utilitarian perspective, and my mind breaks. Condoning 50 years of torture, or even a day's worth, is pretty much the same as condoning universes of agonium lasting for eons in the face of numbers like these, and I don't think that I can condone that for any amount of a trivial benefit.

7Eliezer Yudkowsky11y
(This was my favorite reply, BTW.)

I admire the restraint involved in waiting nearly five years before selecting a favorite.

1Friendly-HI10y
Well, too bad he didn't wait a year longer then ;).

I think preferring torture is the wrong answer, for the same reason that I think universal health-care is a good idea. The financial cost of serious illness and injury is distributed over the taxpaying population, so no single individual has to deal with a spike in medical costs ruining their life. And I think it's still the correct moral choice regardless of whether universal health-care happens to be more expensive or not. Analogously, I think the exact same applies to dust vs torture. I don't think the correct moral choice is about minimizing the total area under the pain-curve at all; it's about avoiding severe pain-spikes for any given individual, even at the cost of having a larger area under the curve. I don't think "shut up and multiply" applies here in its simplistic conception, the way it might apply in the scenario where you have to choose whether 400 people live for sure or 500 people live with .9 probability (and die with .1 probability).

Irrespective of the former, however, the thought experiment is a bit problematic, because it's more complex than apparent at first if we really take it seriously. Eliezer said the dust specks are "barely noticed", but being conscious or aware of something isn't an either-or thing; awareness falls on a continuum, so whatever "pain" the dust specks cause has to be multiplied by how aware the person really is. If someone is tortured, that person is presumably very aware of the physical and emotional pain. Other possible consequences like lasting damage or social repercussions aside, I don't really care all that much about any kind of pain that happens to me while I'm not aware of it. I could probably figure out whether or not pain is actually registered in my brain during my upcoming operation under anesthesia, but the fact that I won't bother tells me very clearly that awareness of pain is an important weight we have ...
1Jiro10y
If you're going to say that, you'll need some threshold, and pain over the threshold makes the whole society count as worse than pain under the threshold. This will mean that any number of people with pain X is better than one person with pain X + epsilon, where epsilon is very small but happens to push it over the threshold. Alternately, you could say that the disutility of pain changes gradually, but that has other problems. I suggest you read up on the repugnant conclusion (http://plato.stanford.edu/entries/repugnant-conclusion/); depending on exactly what you mean, what you suggest is similar to the proposed solutions, which don't really work.
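A toy numeric sketch (my own construction, not Jiro's formalism) of the discontinuity a hard pain threshold creates. Badness is compared lexicographically: anyone over the threshold outweighs any number of people under it, however small the gap:

```python
THRESHOLD = 0.5

def badness(pains):
    """Toy 'threshold' badness: first count pains over the threshold,
    then fall back to total pain. Python tuples compare lexicographically,
    so one over-threshold spike dominates any total kept below it."""
    over = sum(1 for p in pains if p > THRESHOLD)
    return (over, sum(pains))

# One person at pain X + epsilon outranks a million people at pain X:
print(badness([0.501]) > badness([0.499] * 10**6))  # True
```

That one-epsilon cliff is exactly the objection raised above to threshold-style utility functions.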

Personally, I choose C: torture 3^^^3 people for 3^^^3 years. Why? Because I can.

Ahem. My morality is based on maximizing average welfare, while also avoiding extreme individual suffering, rather than cumulative welfare.

So torturing one man for fifty years is not preferable to annoying any number of people.

This is different when the many are also suffering extremely, though - then it may be worthwhile to torture one even more to save the rest.

Trivial annoyances and torture cannot be compared in this quantifiable manner. Torture is not only suffering, but lost opportunity due to imprisonment, permanent mental hardship, activation of pain and suffering processes in the mind, and a myriad of other unconsidered things.

And even if the torture was 'to have flecks of dust dropped in your eyes', you still can't compare a 'torturous amount' applied to one person to a substantial number dropped in the eyes of many people: we aren't talking about CPU cycles here - we are trying to quantify qualifiables.

If ...

5DanielLC14y
Can you compare apples and oranges? You certainly don't seem to have much trouble when you decide how to spend your money at the grocery store. It was rather clear from the context that the "dust in the eye" was a very, very minor torture. People are not going blind. They are perfectly capable of dealing with it. It's just not 3^^^3 times as minor as the torture. If you were to torture two people in exactly the same way, they'd suffer about equally. Why do you imply that's some sort of unanswerable question? If you weren't talking about the ethical side, what were you talking about? He wasn't trying to compare everything about the two choices, just which was more ethical. It would be impossible if he didn't limit it like that.
0snewmark8y
I'm pretty sure the question itself revolves around ethics, as far as I can tell the question is: given these 2 choices, which would you consider, ethically speaking, the ideal option?

I think this all revolves around one question: Is "disutility of dust speck for N people" = N*"disutility of dust speck for one person"?

This, of course, depends on the properties of one's utility function.

How about this... Consider one person getting, say, ten dust specks per second for an hour vs 10×60×60 = 36,000 people getting a single dust speck each.

This is probably a better way to probe the issue at its core. Which of those situations is preferable? I would probably consider the second. However, I suspect one person getting a billion dust specks in their eye per second for an hour would be preferable to 1000 people getting a million per second for an hour.

Suffering isn't linear in dust specks. Well, actually, I'm not sure subjective states in general can be viewed in a linear way. At least, if there is a potentially valid "linear qualia theory", I'd be surprised.

But as far as the dust specks vs torture thing in the original question? I think I'd go with dust specks for all.

But that's one person vs buncha people with dustspecks.

Oh, just had a thought. A less extreme yet quite related real world situation/question would be this: What is appropriate punishment for spammers?

Yes, I understand there're a few additional issues here, that would make it more analogous to, say, if the potential torturee was planning on deliberately causing all those people a DSE (Dust Speck Event)

But still, the spammer issue gives us a more concrete version, involving quantities that don't make our brains explode, so considering that may help work out the principles by which these sorts of questions can be dealt with.

The problem with spammers isn't the cause of a singular dust speck event: it's the cause of multiple dust speck events, repeatedly, to individuals in the population in question. It's also a 'tragedy of the commons' question, since there is more than one spammer.

To respond to your question: What is appropriate punishment for spammers? I am sad to conclude that until Aubrey de Grey manages to conquer human mortality, or the singularity occurs, there is no suitable punishment for spammers.

After either of those, however, I would propose unblocking everyone's toilets and/or triple shifts as a Fry's Electronics floor lackey until the universal heat death, unless you have even >less< interesting suggestions.

If you could take all the pain and discomfort you will ever feel in your life, and compress it into a 12-hour interval, so you really feel ALL of it right then, and then after the 12 hours are up you have no ill effects - would you do it? I certainly would. In fact, I would probably make the trade even if it were 2 or 3 times longer-lasting and of the same intensity. But something doesn't make sense now... am I saying I would gladly double or triple the pain I feel over my whole life?

The upshot is that there are some very nonlinear phenomena involved with calculating amounts of suffering, as Psy-Kosh and others have pointed out. You may indeed move along one coordinate in "suffering-space" by 3^^^3 units, but it isn't just absolute magnitude that's relevant. That is, you cannot recapitulate the "effect" of fifty years of torturing with isolated dust specks. As the responses here make clear, we do not simply map magnitudes in suffering space to moral relevance, but instead we consider the actual locations and contours. (Compare: you decide to go for a 10-mile hike. But your enjoyment of the hike depends more on where you go, than the distance traveled.)

8JoeSchmoe15y
"If you could take all the pain and discomfort you will ever feel in your life, and compress it into a 12-hour interval, so you really feel ALL of it right then, and then after the 12 hours are up you have no ill effects - would you do it? I certainly would." Hubris. You don't know, can't know, how that pain would/could be instrumental in processing external stimuli in ways that enable you to make better decisions. "The sort of pain that builds character," as they say. The concept of processing 'pain' in all its forms is rooted very deep in humanity -- get rid of it entirely (as opposed to modulating it as we currently do), and you run a strong risk of throwing the baby out with the bathwater, especially if you then have an assurance that your life will have no pain going forward. There's a strong argument to be made for deference to traditional human experience in the face of the unknown.

Yes the answer is obvious. The answer is that this question obviously does not yet have meaning. It's like an ink blot. Any meaning a person might think it has is completely inside his own mind. Is the inkblot a bunny? Is the inkblot a Grateful Dead concert? The right answer is not merely unknown, because there is no possible right answer.

A serious person-- one who take moral dilemmas seriously, anyway-- must learn more before proceeding.

The question is an inkblot because too many crucial variables have been left unspecified. For instance, in order for thi...

The non-linear nature of 'qualia' and the difficulty of assigning a utility function to such things as 'minor annoyance' has been noted before. To some it seems insoluble. One solution presented by Dennett in 'Consciousness Explained' is to suggest that there is no such thing as qualia or subjective experience. There are only objective facts. As Searle calls it, 'consciousness denied'. With this approach it would (at least theoretically) be possible to objectively determine the answer to this question based on something like the number of ergs needed to...

Uh... If there's no such thing as qualia, there's no such thing as actual suffering, unless I misunderstand your description of Dennett's views.

But if my understanding is correct, and those views were correct, then wouldn't the answer be "nobody actually exists to care one way or another?" (Or am I sorely mistaken in interpreting that view?)

Regarding your example of income disparity: I might rather be born into a system with very unequal incomes, if, as in America (in my personal and biased opinion), there is a reasonable chance of upping my income through persistence and pluck. I mean hey, that guy with all that money has to spend it somewhere-- perhaps he'll shop at my superstore!

But wait, what does wealth mean? In the case where everyone has the same income, where are they spending their money? Are they all buying the same things? Is this a totalitarian state? An economy without disparity ...

If even one in a hundred billion of the people is driving and has an accident because of the dust speck and gets killed, that's a tremendous number of deaths. If one in a hundred quadrillion of them survives the accident but is mangled and spends the next 50 years in pain, that's also a tremendous amount of torture.

If one in a hundred decillion of them is working in a nuclear power plant and the dust speck makes him have a nuclear accident....

We just aren't designed to think in terms of 3^^^3. It's too big. We don't habitually think much about one-in-a-million chances, much less one in a hundred decillion. But a hundred decillion is a very small number compared to 3^^^3.

1marenz13y
I would say that it is pretty easy to think in terms of 3^^^3. Just assume that everything that could happen due to a dust speck in your eye, will happen.
7ata13y
That is an interesting argument (I've considered it before) though I think it misses the point of the thought experiment. As I understand it, it's not about any of the possible consequences of the dust specks, but about specks as (very minor) intrinsically bad things themselves. It's about whether you're willing to measure the unpleasantness of getting a dust speck in your eye on the same scale as the unpleasantness of being tortured, as (vastly) different in degree rather than fundamentally different in kind.
0homunq12y
How do you know that more accidents are caused than avoided by dust specks? (Of course I realize I'm saying "you" to a 5-year-old comment but you get the picture.)

Douglas and Psy-Kosh: Dennett explicitly says that in denying that there are such things as qualia he is not denying the existence of conscious experience. Of course, Douglas may think that Dennett is lying or doesn't understand his own position as well as Douglas does.

James Bach and J Thomas: I think Eliezer is asking us to assume that there are no knock-on effects in either the torture or the dust-speck scenario, and the usual assumption in these "which economy would you rather have?" questions is that the numbers provided represent the situati...

J Thomas: You're neglecting that there might be some positive-side effects for a small fraction of the people affected by the dust specks; in fact, there is some precedent for this. The resulting average effect is hard to estimate, but (considering that dust specks seem to mostly add entropy to the thought processes of the affected persons), would likely still be negative.

Copying g's assumption that higher-order effects should be neglected, I'd take the torture. For each of the 3^^^3 persons, the choice looks as follows:

1.) A 1/(3^^^3) chance of being tort...
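A toy sketch of the per-person expected-disutility framing this comment sets up (my own construction: 3^^3 stands in for 3^^^3, which won't fit in memory, and the torture-to-speck exchange rate of 10^9 is an arbitrary assumption):

```python
from fractions import Fraction

N = 3 ** 27                  # 3^^3, standing in for the 3^^^3 population
TORTURE = Fraction(10 ** 9)  # assumed speck-equivalents in 50 years of torture
SPECK = Fraction(1)

# Option DUST SPECKS: each of N people bears one speck for certain.
# Option TORTURE: each person faces a 1/N chance of being the one tortured.
expected_torture_per_person = TORTURE / N
print(expected_torture_per_person < SPECK)  # True
```

On this framing the gamble looks better to each individual than the certain speck, as long as the population dwarfs any finite exchange rate, which 3^^^3 trivially does.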

Hmm, tricky one.

Do I get to pick the person who has to be tortured?

As I read this I knew my answer would be the dust specks. Since then I have been mentally evaluating various methods for deciding on the ethics of the situation and have chosen the one that makes me feel better about the answer I instinctively chose.

I can tell you this though. I reckon I personally would choose max five minutes of torture to stop the dust specks event happening. So if the person threatened with 50yrs of torture was me, I'd choose the dust specks.

What if it were a repeatable choice?

Suppose you choose dust specks, say, 1,000,000,000 times. That's a considerable amount of torture inflicted on 3^^^3 people. I suspect that you could find the number of times equivalent to torturing each of those 3^^^3 people for 50 years, and that number would be smaller than 3^^^3. In other words, choose the dust speck enough times, and more people would be tortured effectually for longer than if you chose the 50-year torture an equivalent number of times.

If that math is correct, I'd have to go with the torture, not the dust specks.
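One way to put a rough number on "choose the dust speck enough times": under the purely illustrative assumption that one dust speck is about as bad as one second of torture, the repeat count needed to match 50 years of torture per person is tiny next to 3^^^3.

```python
# Illustrative assumption: one dust speck ~ one second of torture.
# How many repeated speck-choices would match 50 years per person?
seconds_in_50_years = 50 * 365.25 * 24 * 3600
print(seconds_in_50_years)  # ~1.58e9 repeated choices, vastly fewer than 3^^^3
```

So even a billion repetitions of the speck choice is within the same ballpark as one 50-year torture per person, under that assumption.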

themusicgod1 · 10y · 0
Likewise, if this were iterated 3^^^3+1 times (i.e. 3^^^3 plus the reader), it could easily be 50*3^^^3 (i.e. > 3^^^3+1) people tortured. The odds are that if it's possible for you to make this choice, then unless you have reason to believe otherwise, they may be able to make it too, making this an implicit prisoner's dilemma of sorts. On the other side, 3^^^3 specks could possibly crush you, and/or your local cluster of galaxies, into a black hole, so there's that to consider if you value the life within meaningful distance of every one of those 3^^^3 people.
Benquo · 10y · +2
I'm not sure I follow your argument. I'm going to assume that for a single person, 3^^3 dust specks = 50 years of torture. (My earlier figure seems wrong, but 3^^3 dust specks over 50 years is a little under 5,000 dust specks per second.) I'm going to ignore the +1 because these are big numbers already. If this were iterated 3^^^3 times, then we have the choice between: TORTURE: 3^^^3 people are each tortured for 50 years, once. DUST SPECKS: 3^^^3 people are tortured for 50 years, repeated (3^^^3)/(3^^3)=3^(3^^3-3^3) times.
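The per-second figure in this comment checks out; a quick sketch (assuming 365.25-day years):

```python
# 3^^3 = 3^(3^3) = 3^27 dust specks, spread evenly over 50 years.
specks = 3 ** 27                       # 7,625,597,484,987
seconds = 50 * 365.25 * 24 * 3600      # ~1.58e9 seconds in 50 years
print(specks / seconds)                # ~4833, i.e. a little under 5,000/sec
```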
themusicgod1 · 10y · 0
The probability that I'm the only person selected out of 3^^^3 for such a decision, p(i), is less than any reasonable estimate of how many people could be selected, imho. Let's say well below 700 dB against. The chances are much greater that some proportion of those about to be dust-specked or tortured also gets this choice (p(k)). p(k)*3^^^3 > p(i) => 3^^^3 > p(i)/p(k) => true for any reasonable p(i)/p(k). So this means that the effective number of dust particles given to each of us is going to be roughly (1-p(i))p(k)3^^^3. I'm going to assume any amount of dust larger in mass than a few orders of magnitude above the Chandrasekhar limit (1e33 kg) is going to result in a black hole. I can even assume a significant error margin in my understanding of how black holes work, and the results do not change. The smallest dust particle is probably a single hydrogen atom (really, everything resolves to hydrogen at small enough quantities, right?). 1 mol of hydrogen weighs about 1 gram. So (1-p(i))p(k)(3^^^3 specks)(1/(6e23) mol/speck)(1 gram/mol)(1e-3 kg/g)(1e-33 black holes/kg) = roughly (3^^^3)(~1e-730) = roughly 3^^^3 black holes. I.e. 3^(3_1^3_2^3_3^...^3_7e13 - 730) = roughly 3^(3_1^3_2^3_3^...^3_7e13), i.e. 3_1^3_2^3_3^...^3_7e13 - 730 = roughly 3_1^3_2^3_3^...^3_7e13. In conclusion, I think at this level I would choose 'cancel' / 'default' / 'roll a die and determine the choice randomly / not choose', BUT would woefully update my concept of the size of the universe to contain enough mass to support even a reasonably infinitesimal probability of some proportion of 3^^^3 specks of dust, and 3^^^3 people, or at least some reasonable proportion thereof. The question I have now is how is our model of the universe to update given this moral dilemma? What is the new radius of the universe given this situation? It can't be big enough for 3^^^3 dust specks piled on the edge of our universe outside of our light cone somewhere. Either way I think the new radius ought to be termed the "
Benquo · 10y · +1
I don't really care what happens if you take the dust speck literally; the point is to exemplify an extremely small disutility.
themusicgod1 · 10y · 0
I suppose you could view the utility as a meaningful object in this frame and abstract away the dust, too, but in the end the dust-utility system is going to encompass both anyway, so solving the problem on either level is going to solve it on both.

Kyle wins.

Absent using this to guarantee the nigh-endless survival of the species, my math suggests that 3^^^3 beats anything. The problem is that the speck rounds down to 0 for me.

There is some minimum threshold below which it just does not count, like saying, "What if we exposed 3^^^3 people to radiation equivalent to standing in front of a microwave for 10 seconds? Would that be worse than nuking a few cities?" I suppose there must be someone in 3^^^3 who is marginally close enough to cancer for that to matter, but no, that rounds down to 0... (read more)

ThoughtSpeed · 7y · +1
Why would that round down to zero? That's a lot more people having cancer than getting nuked! (It would be hilarious if Zubon could actually respond after almost a decade)

Wow. The obvious answer is TORTURE, all else equal, and I'm pretty sure this is obvious to Eliezer too. But even though there are 26 comments here, and many of them probably know in their hearts torture is the right choice, no one but me has said so yet. What does that say about our abilities in moral reasoning?

Given that human brains are known not to be able to intuitively process even moderately large numbers, I'd say the question can't meaningfully be asked - our ethical modules simply can't process it. 3^^^3 is too large - WAY too large.

I'm unconvinced that the number is too large for us to think clearly. Though it takes some machinery, humans reason about infinite quantities all the time and arrive at meaningful conclusions.

My intuitions strongly favor the dust speck scenario. Even if we forget 3^^^3 and just say that an infinite number of people will experience the speck, I'd still favor it over the torture.

Robin is absolutely wrong, because different instances of human suffering cannot be added together in any meaningful way. The cumulative effect when placed on one person is far greater than the sum of many tiny nuisances experienced by many. Whereas small irritants such as a dust mote do not cause "suffering" in any standard sense of the word, the sum total of those motes concentrated at one time and placed into one person's eye could cause serious injury or even blindness. Dispersing the dust (either over time or across many people) mitigates the effect. If the dispersion is sufficient, there is actually no suffering at all. To extend the example, you could divide the dust mote into even smaller particles, until each individual would not even be aware of the impact.

So the question becomes, would you rather live in a world with little or no suffering (caused by this particular event) or a world where one person suffers badly, and those around him or her sit idly by, even though they reap very little or no benefit from the situation?

The notion of shifting human suffering onto one unlucky individual so that the rest of society can avoid minor inconveniences is morally reprehensible. That (I hope) is why no one has stood up and shouted yeay for torture.

Pablo · 10y · +3
The problem with this claim is that you can construct a series of overlapping comparisons involving experiences that differ but slightly in how painful they are. Then, provided that the series has sufficiently many elements, you'll reach the conclusion that an experience of pain, no matter how intense, is preferable to arbitrarily many instances of the mildest pain imaginable. (Strictly speaking, you could actually avoid this conclusion by assuming that painful experiences of a given intensity have diminishing marginal value and that this value converges to a finite quantity. Then if the limiting value of a very mild pain is less than the value of a single extremely painful experience, the continuity argument wouldn't work. However, I see no independent motivation for embracing a theory of value of this sort. Moreover, such a theory would have incredible implications, e.g., that to determine how bad someone's pain is one needs to consider whether sentient beings have already experienced pains of that intensity in remote regions of spacetime.)
shminux · 10y · 0
Yeah, this is a common attempt to avoid this particular repugnant conclusion. This approach leads to conclusions like: 3^^^3 mildly stabbed toes are better than a single moderately stabbed one. (Because if not, we can construct an unbroken chain of comparable pain experiences from specks to torture.) The motivation is there: to make dust specks and torture incomparable. Unfortunately, this approach doesn't work, as it results in infinitely many arbitrarily defined discontinuities.

The obvious answer is TORTURE, all else equal, and I'm pretty sure this is obvious to Eliezer too.

That is the straightforward utilitarian answer, without any question. However, it is not the common intuition, and even if Eliezer agrees with you he is evidently aware that the common intuition disagrees, because otherwise he would not bother blogging it. It's the contradiction between intuition and philosophical conclusion that makes it an interesting topic.

Robin's answer hinges on "all else being equal." That condition can tie up a lot of loose ends, it smooths over plenty of rough patches. But those ends unravel pretty quickly once you start to consider all the ways in which everything else is inherently unequal. I happen to think the dust speck is a 0 on the disutility meter, myself, and 3^^^3*0 disutilities = 0 disutility.

I believe that ideally speaking the best choice is the torture, but pragmatically, I think the dust speck answer can make more sense. Of course it is more intuitive morally, but I would go as far as saying that the utility can be higher for the dust specks situation (and thus our intuition is right). How? the problem is in this sentence: "If neither event is going to happen to you personally," the truth is that in the real world, we can't rely on this statement. Even if it is promised to us or made into a law, this type of statements often won't ... (read more)

themusicgod1 · 10y · 0
Your link is 404ing. Is http://spot.colorado.edu/~norcross/Comparingharms.pdf‎ the same one?
Pablo · 10y · 0
Here's the link (both links above are dead).
ignoranceprior · 7y · +2
Here's the latest working link (all three above are dead) Also, here's an archive in case that one ever breaks!

Robin, could you explain your reasoning. I'm curious.

Humans get barely noticeable "dust speck equivalent" events so often in their lives that the number of people in Eliezer's post is irrelevant; it's simply not going to change their lives, even if it's a gazillion lives, even with a number bigger than Eliezer's (even considering the "butterfly effect", you can't say if the dust speck is going to change them for the better or worse -- but with 50 years of torture, you know it's going to be for the worse).

Subjectively for these people, ... (read more)

@Robin,

"But even though there are 26 comments here, and many of them probably know in their hearts torture is the right choice, no one but me has said so yet."

I thought that Sebastian Hagen and I had said it. Or do you think we gave weasel answers? Mine was only contingent on my math being correct, and I thought his was similarly clear.

Perhaps I was unclear in a different way. By asking if the choice was repeatable, I didn't mean to dodge the question; I meant to make it more vivid. Moral questions are asked in a situation where many people a... (read more)

Hmm, thinking some more about this, I can see another angle (not the suffering angle, but the "being prudent about unintended consequences" angle):

If you had the choice between very very slightly changing the life of a huge number of people or changing a lot the life of only one person, the prudent choice might be to change the life of only one person (as horrible as that change might be).

Still, with the dust speck we can't really know if the net final outcome will be negative or positive. It might distract people who are about to have genius ide... (read more)

Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?

The square of the number of milliseconds in 50 years is about 2.5*10^24.

Would you rather one person tortured for a millisecond (then no ill effects), or that 3^^^3/(2.5*10^24) people get a dust speck per second for 50 centuries?

OK, so the utility/effect doesn't scale when you change the times. But even if each 1% added dust/torture time made things ten times worse, when you reduce the dust-speckled population to reflect that it's still countless universes worth of people.
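As a quick check on the millisecond arithmetic in this comment (assuming 365.25-day years):

```python
# Milliseconds in 50 years, and the square of that number.
ms = 50 * 365.25 * 24 * 3600 * 1000
print(ms)       # ~1.58e12
print(ms ** 2)  # ~2.49e24
```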

I'm with Tomhs. The question has less value as a moral dilemma than as an opportunity to recognize how we think when we "know" the answer. I intentionally did not read the comments last night so I could examine my own thought process, and tried very hard to hold an open mind (my instinct was dust). It's been a useful and interesting experience. Much better than the brain teasers, which I can generally get because I'm on heightened alert when reading El's posts. Here being on alert simply allowed me to try to avoid immediately giving in to my bias.

Averaging utility works only when the law of large numbers starts to play a role. It's a good general policy, as stuff subject to it happens all the time, enough to give sensible results over the human/civilization lifespan. So, if Eliezer's experiment is a singular event and similar events don't happen frequently enough, the answer is 3^^^3 specks. Otherwise, torture (as in this case, similar frequent-enough choices would lead to a tempest of specks in anyone's eye which is about 3^^^3 times worse than 50 years of torture, for each and every one of them).

Benquo, your first answer seems equivocal, and so did Sebastian's on a first reading, but now I see that it was not.

Torture,

Consider three possibilities:

(a) A dust speck hits you with probability one.
(b) You face an additional probability 1/(3^^^3) of being tortured for 50 years.
(c) You must blink your eyes for a fraction of a second, just long enough to prevent a dust speck from hitting you in the eye.

Most people would pick (c) over (a). Yet 1/(3^^^3) is such a small number that by blinking your eyes one more time than you normally would, you increase your chances of being captured by a sadist and tortured for 50 years by more than 1/(3^^^3). Thus, (b) must be better than (c). Consequently, most people should prefer (b) to (a).
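The expected-disutility comparison behind this argument can be sketched with a stand-in value for 3^^^3 (which is far too large to compute with) and hypothetical disutility units; both numbers below are illustrative assumptions, not values from the thread.

```python
from fractions import Fraction

N = 10 ** 100                    # stand-in for 3^^^3
torture = Fraction(10 ** 12)     # assumed disutility of 50 years of torture
speck = Fraction(1)              # assumed disutility of one dust speck

expected_b = torture / N         # option (b): a 1/N chance of torture
expected_a = speck               # option (a): a certain dust speck
print(expected_b < expected_a)   # True: the tiny gamble beats the certain speck
```

The conclusion is insensitive to the assumed torture disutility: any finite value divided by a 3^^^3-sized denominator falls below any fixed positive speck disutility.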

timujin · 9y · +8
You know, that actually persuaded me to override my intuitions and pick torture over dust specks.
Jiro · 9y · +3
You don't even have to go that far. Replace "dust specks" with "the inconvenience of not going outside the house" and "tiny chance of torture" with "tiny chance that being outside the house will lead to you getting killed".
timujin · 9y · +1
Yeah, I understood the point.

There isn't any right answer. Answers to what is good or bad are a matter of taste, to borrow from Nietzsche.

To me the example has a messianic quality. One person suffers immensely to save others from suffering. Does the sense that there is a "right" answer come from a Judeo-Christian sense of what is appropriate? Is this a sort of bias in line with biases towards expecting facts to conform to a story?

Also, this example suggests to me that the value pluralism of Cowen makes much more sense than some reductive approach that seeks to create one objective me... (read more)

Why is this a serious question? Given the physical unreality of the situation (the putative existence of 3^^^3 humans and the ability to actually create the option in the physical universe), why is this question taken seriously while something like "is it better to kill Santa Claus or the Easter Bunny?" is considered silly?

Fascinating, and scary, the extent to which we adhere to established models of moral reasoning despite the obvious inconsistencies. Someone here pointed out that the problem wasn't sufficiently defined, but then proceeded to offer examples of objective factors that would appear necessary to evaluation of a consequentialist solution. Robin seized upon the "obvious" answer that any significant amount of discomfort, over such a vast population, would easily dominate, with any conceivable scaling factor, the utilitarian value of the torture of a si... (read more)

The hardships experienced by a man tortured for 50 years cannot compare to a trivial experience massively shared by a large number of individuals, even on the scale that Eli describes. There is no accumulation of experiences, and it cannot be conflated into a larger meta dust-in-the-eye experience; it has to be analyzed as a series of discrete experiences.

As for larger social implications, the negative consequence of so many dust specked eyes would be negligible.

Wow. People sure are coming up with interesting ways of avoiding the question.

Eliezer wrote "Wow. People sure are coming up with interesting ways of avoiding the question."

I posted earlier on what I consider the more interesting question of how to frame the problem in order to best approach a solution.

If I were to simply provide my "answer" to the problem, with the assumption that the dust in the eyes is likewise limited to 50 years, then I would argue that the dust is to be preferred to the torture, not on a utilitarian basis of relative weights of the consequences as specified, but on the bigger-picture view th... (read more)

Eliezer, are you suggesting that declining to make up one's mind in the face of a question that (1) we have excellent reason to mistrust our judgement about and (2) we have no actual need to have an answer to is somehow disreputable?

As for your link to the "motivated stopping" article, I don't quite see why declining to decide on this is any more "stopping" than choosing a definite one of the options. Or are you suggesting that it's an instance of motivated continuation? Perhaps it is, but (as you said in that article) the problem with ... (read more)

What happens if there aren't 3^^^3 instanced people to get dust specks? Do those specks carry over such that person #1 gets a 2nd speck and so on? If so, you would elect to have the person tortured for 50 years for surely the alternative is to fill our universe with dust and annihilate all cultures and life.

Robin, of course it's not obvious. It's only an obvious conclusion if the global utility function from the dust specks is an additive function of the individual utilities, and since we know that utility functions must be bounded to avoid Dutch books, we know that the global utility function cannot possibly be additive -- otherwise you could break the bound by choosing a large enough number of people (say, 3^^^3).


From a more metamathematical perspective, you can also question whether 3^^^3 is a number at all. It's perfectly straightforward to construct a p... (read more)

homunq · 12y · +2
I once read the following story about a Russian mathematician. I can't find the source right now.

Cast: Russian mathematician RM, other guy OG

RM: "Truly large numbers don't really exist in the same sense that small ones do."
OG: "That's ridiculous. Consider the powers of two. Does 2^1 exist?"
RM: "Yes."
OG: "OK, does 2^2 exist?"
RM: "...Yes."
OG: "So you'd agree that 2^3 exists?"
RM: "...Yes."
OG: "How about 2^4?"
RM: ".......Yes."
OG: "So this is silly. Where would you ever draw the boundary?"
RM: "..............................................................................................................................................."

Eliezer, are you suggesting that declining to make up one's mind in the face of a question that (1) we have excellent reason to mistrust our judgement about and (2) we have no actual need to have an answer to is somehow disreputable?

Yes, I am.

Regarding (1), we pretty much always have excellent reason to mistrust our judgments, and then we have to choose anyway; inaction is also a choice. The null plan is a plan. As Russell and Norvig put it, refusing to act is like refusing to allow time to pass.

Regarding (2), whenever a tester finds a user input that cr... (read more)

polymathwannabe · 10y · -5

Fascinating question. No matter how small the negative utility in the dust speck, multiplying it with a number such as 3^^^3 will make it way worse than torture. Yet I find the obvious answer to be the dust speck one, for reasons similar to what others have pointed out - the negative utility rounds down to zero.

But that doesn't really solve the problem, for what if the harm in question was slightly larger? At what point does it cease rounding down? I have no meaningful criteria to give for that one. Obviously there must be a point where it does cease doing... (read more)

"Regarding (1), we pretty much always have excellent reason to mistrust our judgments, and then we have to choose anyway; inaction is also a choice. The null plan is a plan. As Russell and Norvig put it, refusing to act is like refusing to allow time to pass."

This goes to the crux of the matter, why to the extent the future is uncertain, it is better to decide based on principles (representing wisdom encoded via evolutionary processes over time) rather than on the flat basis of expected consequences.

Would you condemn one person to be horribly tortured for fifty years without hope or rest, to save every qualia-experiencing being who will ever exist one blink?

Is the question significantly changed by this rephrasing? It makes SPECKS the default choice, and it changes 3^^^3 to "all." Are we better able to process "all" than 3^^^3, or can we really process "all" at all? Does it change your answer if we switch the default?

Would you force every qualia-experiencing being who will ever exist to blink one additional time to save one person from being horribly tortured for fifty years without hope or rest?

> For those who would pick TORTURE, what about Vassar's universes of agonium? Say a googolplex-persons' worth of agonium for a googolplex years.

If you mean would I condemn all conscious beings to a googolplex of torture to avoid universal annihilation from a big "dust crunch" my answer is still probably yes. The alternative is universal doom. At least the tortured masses might have some small chance of finding a solution to their problem at some point. Or at least a googolplex years might pass leaving some future civilization free to prosper. ... (read more)

> Would you condemn one person to be horribly tortured for fifty years without hope or rest, to save every qualia-experiencing being who will ever exist one blink?

That's assuming you're interpreting the question correctly. That you aren't dealing with an evil genie.

You never said we couldn't choose who specifically gets tortured, so I'm assuming we can make that selection. Given that, the once agonizingly difficult choice is made trivially simple. I would choose 50 years of torture for the person who made me make this decision.

Since I chose the specks -- no, I probably wouldn't pay a penny; avoiding the speck is not even worth the effort to decide to pay the penny or not. I would barely notice it; it's too insignificant to be worth paying even a tiny sum to avoid.

I suppose I too am "rounding down to zero"; a more significant harm would result in a different answer.

phob · 14y · -1
You're avoiding the question. What if a penny were automatically paid for you each time in the future to avoid dust specks floating into your eye? The question is whether the dust speck is worth at least a negative penny of disutility. For me, I would say yes.

"For those who would pick SPECKS, would you pay a single penny to avoid the dust specks?"

To avoid all the dust specks, yeah, I'd pay a penny and more. Not a penny per speck, though ;)

The reason is to avoid having to deal with the "unintended consequences" of being responsible for that very very small change over such a large number of people. It's bound to have some significant indirect consequences, both positive and negative, on the far edges of the bell curve... the net impact could be negative, and a penny is little to pay to avoid responsibility for that possibility.

The first thing I thought when I read this question was that the dust specks were obviously preferable. Then I remembered that my intuition likes to round 3^^^3 down to something around twenty. Obviously, the dust specks are preferable to the torture for any number at all that I have any sort of intuitive grasp over.

But I found an argument that pretty much convinced me that the torture was the correct answer.

Suppose that instead of making this choice once, you will be faced with the same choice 10^17 times for the next fifty years (This number was chosen... (read more)

aausch · 14y · -3
The reasoning here seems very broken to me (I have no opinion on the conclusion yet): Look at a version of the reverse dial. Say that you start with 3^^^3 people having 1000000 dust specks a second rubbed in their eyes, and 0 people tortured. Each time you turn the dial up by 1, 1 person is moved over from the "speck in the eye" list to the "tortured for 50 years" list, and the frequency is reduced by 1 speck/second. Would you turn the dial up to 1000000?
phob · 14y · 0
So because there is a continuum between the right answer (lots of torture) and the wrong answer (3^^^3 horribly blinded people), you would rather blind those people?
Manfred · 13y · +4
Nah, he was pretty clearly challenging the use of induction in the above post. The larger problem is assuming linearity in an obviously nonlinear situation - this also explains why the induction appears to work either way. Applying 1 pound of force to someone's kneecap is simply not 1/10th as bad as applying 10 pounds of force to someone's kneecap.
XiXiDu · 13y · +4
This has nothing to do with the original question. You rephrased it so that it now asks if you'd rather torture one person or 3^^^3. Of course you rather torture one person than 3^^^3. That does not equal torturing one person or that 3^^^3 people get dust specks in their eyes for a fraction of a second.

"... whenever a tester finds a user input that crashes your program, it is always bad - it reveals a flaw in the code - even if it's not a user input that would plausibly occur; you're still supposed to fix it. "Would you kill Santa Claus or the Easter Bunny?" is an important question if and only if you have trouble deciding. I'd definitely kill the Easter Bunny, by the way, so I don't think it's an important question."

I write code for a living; I do not claim that it crashes the program. Rather the answer is irrelevant as I don't thin... (read more)

By "pay a penny to avoid the dust specks" I meant "avoid all dust specks", not just one dust speck. Obviously for one speck I'd rather have the penny.

phob · 14y · +2
So if someone would pay a penny, they should pick torture if it were 3^^^^3 people getting dust specks, which makes it suspect that they understood 3^^^3 in the first place.

what about Vassar's universes of agonium? Say a googolplex-persons' worth of agonium for a googolplex years.

To reduce suffering in general rather than your own (it would be tough to live with), bring on the coddling grinders. (10^10^100)^2 is a joke next to 3^^^3.

Having said that, it depends on the qualia-experiencing population of all existence compared to the numbers affected, and whether you change existing lives or make new ones. If only a few googolplex-squared people-years exist anyway, I vote dust.

I also vote to kill the bunny.
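The claim that a googolplex squared is a joke next to 3^^^3 can be made concrete with iterated base-10 logarithms (for x, y > 1, x > y exactly when log10 x > log10 y); this is a sketch, not anything from the thread.

```python
import math

# Compare a googolplex, 10^(10^100), with the modest tower 3^^5.
# Taking log10 twice: log10(log10(10^(10^100))) = 100.
gp_loglog = 100.0

# 3^^5 = 3^(3^^4), so log10(3^^5) = 3^^4 * log10(3), and therefore
# log10(log10(3^^5)) = log10(3^^4) + log10(log10(3))
#                    = 3^27 * log10(3) + log10(log10(3)) ~ 3.6e12.
tower5_loglog = 3 ** 27 * math.log10(3) + math.log10(math.log10(3))

print(tower5_loglog > gp_loglog)  # True: even 3^^5 dwarfs a googolplex,
                                  # and 3^^^3 is 3^^7625597484987
```

Squaring the googolplex only changes its double log from 100 to about 100.3, so the comparison is unaffected.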

For those who would pick TORTURE, what about Vassar's universes of agonium? Say a googolplex-persons' worth of agonium for a googolplex years.

Torture, again. From the perspective of each affected individual, the choice becomes:

1.) A (10^(10^100))/(3^^^3) chance of being tortured for 10^(10^100) years.
2.) A 1 chance of a dust speck.
(or very slightly different numbers if the 10^(10^100) people exist in addition to the 3^^^3 people; the difference is too small to be noticeable)

I'd still take the former. (10^(10^100))/(3^^^3) is still so close to zero that there'... (read more)

Eliezer, it's the combination of (1) totally untrustworthy brain machinery and (2) no immediate need to make a choice that I'm suggesting means that withholding judgement is reasonable. I completely agree that you've found a bug; congratulations, you may file a bug report and add it to the many other bug reports already on file; but how do you get from there to the conclusion that the right thing to do is to make a choice between these two options?

When I read the question, I didn't go into a coma or become psychotic. I didn't even join a crazy religion or ... (read more)

Let's suppose we measure pain in pain points (pp). Any event which can cause pain is given a value in [0, 1], with 0 being no pain and 1 being the maximum amount of pain perceivable. To calculate the pp of an event, assign a value to the pain, say p, and then multiply it by the number of people who will experience the pain, n. So for the torture case, assume p = 1, then:

torture: 1*1 = 1 pp

For the speck in eye case, suppose it causes the least amount of pain greater than no pain possible. Denote this by e. Assume that the dust speck causes e amount of ... (read more)
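The pain-point (pp) bookkeeping in this comment can be sketched directly. The numeric values below are illustrative assumptions, and 3^^^3 is replaced by a vastly smaller stand-in, since the real number cannot be represented.

```python
def pain_points(pain, people):
    """pp of an event: per-person pain in [0, 1] times people affected."""
    return pain * people

torture_pp = pain_points(1.0, 1)             # p = 1, one person
e = 1e-300                                   # assumed: least pain > 0
speck_pp = pain_points(e, float(10 ** 305))  # stand-in population for 3^^^3
print(speck_pp > torture_pp)                 # even a tiny e swamps torture_pp
```

This is the crux the thread keeps circling: any fixed e > 0 multiplied by a 3^^^3-sized population exceeds any bounded torture value, so resisting TORTURE requires either e = 0 or a non-additive aggregation rule.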

"Wow. People sure are coming up with interesting ways of avoiding the question."

My response was a real request for information- if this is a pure utility test, I would select the dust specks. If this were done to a complex, functioning society, adding dust specks into everyone's eyes would disrupt a great deal of important stuff- someone would almost certainly get killed in an accident due to the distraction, even on a planet with only 10^15 people and not 3^^^^3.

Eliezer, in your response to g, are you suggesting that we should strive to ensure that our probability distribution over possible beliefs sum to 1? If so, I disagree: I don't think this can be considered a plausible requirement for rationality. When you have no information about the distribution, you ought to assign probabilities uniformly, according to Laplace's principle of indifference. But the principle of indifference only works for distributions over finite sets. So for infinite sets you have to make an arbitrary choice of distribution, which violates indifference.

"For those who would pick SPECKS, would you pay a single penny to avoid the dust specks?"

Yes. Note that, for the obvious next question, I cannot think of an amount of money large enough such that I would rather keep it than use it to save a person from torture. Assuming that this is post-Singularity money which I cannot spend on other life-saving or torture-stopping efforts.

"You probably wouldn't blind everyone on earth to save that one person from being tortured, and yet, there are (3^^^3)/(10^17) >> 7*10^9 people being blinded for ea... (read more)

phob · 14y · +3
People are being tortured, and it wouldn't take too much money to prevent some of it. Obviously, there is already a price on torture.

My algorithm goes like this:
there are two variables, X and Y.
Adding a single additional dust speck to a person's eye over their entire lifetime increases X by 1 for every person this happens to.
A person being tortured for a few minutes increases Y by 1.

I would object to most situations where Y is greater than 1. But I have no preferences at all with regard to X.

See? Dust specks and torture are not the same. I do not lump them together as "disutility". To do so seems to me a preposterous oversimplification. In any case, it has to be argued that... (read more)

I am not convinced that this question can be converted into a personal choice where you face the decision of whether to take the speck or a 1/3^^^3 chance of being tortured. I would avoid the speck and take my chances with torture, and I think that is indeed an obvious choice.

I think a more apposite application of that translation might be:
If I knew I was going to live for 3^^^3+50*365 days, and I was faced with that choice every day, I would always choose the speck, because I would never want to endure the inevitable 50 years of torture.

The difference is that framing the question as a one-off individual choice obscures the fact that in the example proffered, the torture is a certainty.

1/3^^^3 chance of being tortured... If I knew I was going to live for 3^^^3+50*365 days, and I was faced with that choice every day, I would always choose the speck, because I would never want to endure the inevitable 50 years of torture.

That wouldn't make it inevitable. You could get away with it, but then you could get multiple tortures. Rolling 6 dice often won't get exactly one "1".

Tom McCabe wrote:
The probability is effectively much greater than that, because of complexity compression. If you have 3^^^^3 people with dust specks, almost all of them will be identical copies of each other, greatly reducing abs(U(specks)). abs(U(torture)) would also get reduced, but by a much smaller factor, because the number is much smaller to begin with.

Is there something wrong with viewing this from the perspective of the affected individuals (unique or not)? For any individual instance of a person, the probability of directly experiencing the tortu... (read more)

Answer depends on the person's POV on consciousness.

because of complexity compression. If you have 3^^^^3 people with dust specks, almost all of them will be identical copies of each other, greatly reducing abs(U(specks)).

If so, I want my anti-wish back. Evil Genie never said anything about compression. No wonder he has so many people to dust. I'm complaining to GOD Over Djinn.

If they're not compressed, surely a copy will still experience qualia? Does it matter that it's identical to another? If the sum experience of many copies is weighted as if there was just one, then I'm officially converting from infinite set agnostic to infinite set atheist.

Bayesianism, Infinite Decisions, and Binding replies to Vann McGee's "An airtight dutch book", defending the permissibility of an unbounded utility function.

An option that dominates in finite cases will provably be part of the maximal option in any finite problem; but in infinite problems, where there is no maximal option, dominance of the option in the infinite case does not follow from its dominance in all finite cases.

If you allow a discontinuity where the utility of the infinite case is not the same as the limit of the utilities of t... (read more)

It is clearly not so easy to have a non-subjective determination of utility.
After some thought I pick the torture. That is because the concept of 3^^^3 people means that no evolution will occur while that many people live. The one advantage of death is that it allows for evolution. It seems likely that we will have evolved into much more interesting life forms long before 3^^^3 of us have passed.
What's the utility of that?

Recovering Irrationalist:
True: my expected value would be 50 years of torture, but I don't think that changes my argument much.

Sebastian:
I'm not sure I understand what you're trying to say. (50*365)/3^^^3 (which is basically the same thing as 1/3^^^3) days of torture wouldn't be anything at all, because it wouldn't be noticeable. I don't think you can divide time to that extent from the point of view of human consciousness.

I don't think the math in my personal utility-estimation algorithm works out significantly differently depending on which of the cas... (read more)

I'll go ahead and reveal my answer now: Robin Hanson was correct, I do think that TORTURE is the obvious option, and I think the main instinct behind SPECKS is scope insensitivity.

Some comments:

While some people tried to appeal to non-linear aggregation, you would have to appeal to a non-linear aggregation which was non-linear enough to reduce 3^^^3 to a small constant. In other words it has to be effectively flat. And I doubt they would have said anything different if I'd said 3^^^^3.

If anything is aggregating nonlinearly it should be the 50 years of torture, to which one person has the opportunity to acclimate; there is no individual acclimatization to the dust specks because each dust speck occurs to a different person. The only person who could be "acclimating" to 3^^^3 is you, a bystander who is insensitive to the inconceivably vast scope.

Scope insensitivity - extremely sublinear aggregation by individuals considering bad events happening to many people - can lead to mass defection in a multiplayer prisoner's dilemma even by altruists who would normally cooperate. Suppose I can go skydiving today but this causes the world to get warmer by 0.000001 degree Celsius... (read more)

3MrHen14y
I will admit, that was a pretty awesome lesson to learn. Marcello's reasoning had it click in my head but the kicker that drove the point home was scaling it to 3^^^^3 instead of 3^^^3.
2Aharon13y
I think I understand why one should derive the conclusion to torture one person, given these premises. What I don't understand is the premises. In the article about scope insensitivity you linked to, it was very clear that the scope of things made things worse. I don't understand why it should be wrong to round down the dust speck, or similar very small disutilities, to zero - Basically, what Scott Clark said: 3^^^3*0 disutilities = 0 disutility.

Rounding to zero is odd. In the absence of other considerations, you have no preference whether or not people get a dust speck in their eye?

It is also in violation of the structure of the thought experiment: a dust speck was chosen as the least bad bad thing that can happen to someone. If you would round it to zero, then you need to choose a slightly worse thing; I can't imagine your intuitions will be any less shocked by preferring torture to that slightly worse thing.

5lessdazed12y
That was a mistake, since so many people round it to zero.
7dlthomas12y
It seems to have been. Since the criteria for the choice were laid out explicitly, though, I would have hoped that more people would notice that the thought experiment they solved so easily was not actually the one they had been given, and perform the necessary adjustment. This is obviously too optimistic - but perhaps can serve itself as some kind of lesson about reasoning.
2Aharon12y
I concede that it is reasonable within the constraints of the thought experiment. However, I think it should be noted that this will never be more than a thought experiment, and that if real-world numbers and real-world problems are used, it becomes less clear cut, and the intuition against the 50 years of torture is a good starting point in some cases.
0coldlyrationalogic9y
It's odd. If you think about it, Eliezer's Argument is absolutely correct. But it seems rather unintuitive even though I KNOW it's right. We humans are a bit silly sometimes. On the other hand, we did manage to figure this out, so it's not that bad.
1TAG1y
"In the absence of other considerations, you have no preference whether or not people get a dust speck in their eye?" I can regard the moral significance as zero. I don't have to take the view that morality "is" preferences, of any kind or degree. Excessive demandingness is a famous problem with utiltarianism: rounding down helps to curtail it.

While some people tried to appeal to non-linear aggregation, you would have to appeal to a non-linear aggregation which was non-linear enough to reduce 3^^^3 to a small constant.

Sum(1/n^2, 1, 3^^^3) < Sum(1/n^2, 1, inf) = (pi^2)/6

So an algorithm like "order utilities from least to greatest, then sum with a weight of 1/n^2, where n is their position in the list" could pick dust specks over torture while recommending most people not go sky diving (as their benefit is outweighed by the detriment to those less fortunate).

This would mean that scope insensitivity, beyond a certain point, is a feature of our morality rather than a bias; I am not sure what my opinion of this outcome is.

That said, while giving an answer to the one problem that some seem more comfortable with, and to the second that everyone agrees on, I expect there are clear failure modes I haven't thought of.

Edited to add:

This of course holds for weights of 1/n^a for any a>1; the most convincing defeat of this proposition would be showing that weights of 1/n (or 1/(n log(n))) drop off quickly enough to lead to bad behavior.
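The aggregation rule above can be sketched directly. This is a minimal sketch under two assumptions of mine (not stated in the comment): each dust speck counts as one unit of disutility, and "least to greatest" is read as putting the worst-off person at position n=1.

```python
import math

def weighted_disutility(disutilities):
    # Sort worst-first, then weight the n-th entry by 1/n^2, so the
    # aggregate converges no matter how many people are affected.
    total = 0.0
    for n, d in enumerate(sorted(disutilities, reverse=True), start=1):
        total += d / n**2
    return total

# A million unit-disutility dust specks aggregate to less than pi^2/6:
specks = weighted_disutility([1.0] * 10**6)
print(specks < math.pi ** 2 / 6)  # → True, for any number of specks

# ...so any single harm weighing more than pi^2/6 specks outweighs them all.
print(weighted_disutility([2.0]) > specks)  # → True
```

The same bound holds with 3^^^3 specks, since Sum(1/n^2) never exceeds (pi^2)/6.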

1dlthomas12y
On recently encountering the wikipedia page on Utility Monsters and thence to the Mere Addition Paradox, it occurs to me that this seems to neatly defang both. Edited - rather, completely defangs the Mere Addition Paradox, may or may not completely defang Utility Monsters depending on details but at least reduces their impact.
3private_messaging11y
And why should they consider 3^^^^3 differently, if their function asymptotically approaches a limit? Besides, a human utility function would take in the whole, and then perhaps consider duplicates, uniqueness (you don't want your prehistoric tribe to lose the last man who knows how to make a stone axe), and so on, rather than evaluate one by one and then sum.

The false allure of oversimplified morality is in the ease of inventing hypothetical examples where it works great. One could, of course, posit a colder planet. Most of the population would prefer that planet to be warmer, but if the temperature rise exceeds 5 Celsius, the gas hydrates melt, and everyone dies. And they all have to decide on one day.

Or one could posit a planet Linearium populated entirely by people who really love skydiving, who would want to skydive every day, but that would raise the global temperature by 100 Celsius, and they'd rather be alive than skydive every day and boil to death. They opt to skydive on their birthdays at the expense of a 0.3 degree global temperature rise, which each one of them finds to be an acceptable price to pay for getting to skydive on one's birthday.
0[anonymous]9y
But still, WHY is torture better? What is even the problem with the dust specks? Some of the people who get dust specks in their eyes will die in accidents caused by the dust particles? Is this why the dust specks are so bad? But then, have we considered the fact that dust specks may save an equal number of people, who would otherwise die? I really don't get it and it bothers me a lot.
3Roho9y
Okeymaker, I think the argument is this:

Torturing one person for 50 years is better than torturing 10 persons for 40 years.
Torturing 10 persons for 40 years is better than torturing 1000 persons for 10 years.
Torturing 1000 persons for 10 years is better than torturing 1000000 persons for 1 year.
Torturing 10^6 persons for 1 year is better than torturing 10^9 persons for 1 month.
Torturing 10^9 persons for 1 month is better than torturing 10^12 persons for 1 week.
Torturing 10^12 persons for 1 week is better than torturing 10^15 persons for 1 day.
Torturing 10^15 persons for 1 day is better than torturing 10^18 persons for 1 hour.
Torturing 10^18 persons for 1 hour is better than torturing 10^21 persons for 1 minute.
Torturing 10^21 persons for 1 minute is better than torturing 10^30 persons for 1 second.
Torturing 10^30 persons for 1 second is better than torturing 10^100 persons for 1 millisecond.

Torturing for 1 millisecond is exactly what a dust speck does. And if you disagree with the numbers, you can add a few millions. There is still plenty of space between 10^100 and 3^^^3.
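For a sense of why there is "plenty of space" between 10^100 and 3^^^3, here is a minimal recursive sketch of Knuth's up-arrow notation (`up_arrow` is a hypothetical helper name; only the small cases shown terminate, and 3^^^3 itself is far beyond any computation):

```python
def up_arrow(a, n, b):
    # Knuth's up-arrow a ^(n arrows) b: one arrow is exponentiation;
    # each extra arrow iterates the previous operator. Only tiny inputs terminate.
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3  → 27
print(up_arrow(3, 2, 3))  # 3^^3 → 7625597484987
# 3^^^3 = up_arrow(3, 3, 3) is a tower of 7625597484987 threes: do not run.
```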
1[anonymous]9y
Yes, if this is the case (it would be nice if Eliezer confirmed it) I can see where the logic halts from my perspective :) Explanatory example if someone cares: I disagree. From my moral standpoint AND from my utility function, whereby I am a bystander and perceive all humans as a cooperating system and want to minimize the damage to it, I think that it is better for 10^30 persons to put up with 1 second of intense pain than for a single one to survive a whole minute. It is much, much easier to recover from one second of pain than from being tortured for a minute. And a dust speck is virtually harmless. The potential harm it may cause should at least POSSIBLY be outweighed by the benefits, e.g. someone not being run over by a car because he stopped and scratched his eye.
4Roho9y
Okay, so let's zoom in here. What is preferable?

Torturing 1 person for 60 seconds.
Torturing 100 persons for 59 seconds.
Torturing 10000 persons for 58 seconds.
Etc.

Kind of a paradox of the heap. How many seconds of torture are still torture? And 10^30 is really a lot of people. That's what Eliezer meant with "scope insensitivity". And all of them would be really grateful if you spared them their second of pain. Could that be worth a minute of pain?
2Jiro9y
That's fighting the hypothetical. Assume that the speck is such that the harm caused by the speck slightly outweighs the benefits.
0[anonymous]9y
Or the benefits could slightly outweigh the harm. You have to treat this option as a net win of 0 then, because you have no more info to go on, so the probs. are 50/50. Option A: Torture. Net win is negative. Option B: Dust specks. Net win is zero. Make your choice.
5Quill_McGee9y
In the Least Convenient Possible World of this hypothetical, every dust speck causes a constant small amount of harm with no knock-on effects(no avoiding buses, no crashing cars...)
0private_messaging9y
I thought the original point was to focus just on the inconvenience of the dust, rather than simply proposing that, out of 3^^^3 people who were dustspecked, one person would've gotten something worse than 50 years of torture as a consequence of the dust speck. The latter is not even an ethical dilemma; it's merely an (entirely baseless but somewhat plausible) assertion about the consequences of dust specks in the eyes.
0Quill_McGee9y
exactly! No knock-on effects. Perhaps you meant to comment on the grandparent(great-grandparent? do I measure from this post or your post?) instead?
0private_messaging9y
yeah, clicked wrong button.
5private_messaging9y
Torturing a person for 1 millisecond is not necessarily even a possibility. It doesn't make any sense whatsoever; in 1 millisecond no interesting feedback loops can even close. If we accept that torture is some class of computational processes that we wish to avoid, the badness definitely could be eating up your 3^^^3s in one way or the other. We have absolutely zero reason to expect linearity when some (however unknown) properties of a set of computations are involved. And the computational processes are not infinitely divisible into smaller lengths of time.
3TomStocker9y
Agree, having lived in chronic pain supposedly worse than untrained childbirth, I'd say that even an hour has a seriously different capacity for suffering than a day, and a day different from a week. For me it breaks down somewhere, even when multiplying between the 10^15 for 1 day and 10^21 for one minute. You can't really feel pain in a minute that is comparable to a day, even within orders of magnitude; it's just qualitatively different. Interested to hear pushback on this.
4Kindly9y
We could go from a day to a minute more slowly; for example, by increasing the number of people by a factor of a googolplex every time the torture time decreases by 1 second. I absolutely agree that the length of torture increases how bad it is in nonlinear ways, but this doesn't mean we can't find exponential factors that dominate it at every point at least along the "less than 50 years" range.
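The claim that a large enough per-step population factor dominates any fixed nonlinearity can be checked numerically for one assumed form. Suppose per-person disutility grows like t^10 in the torture length t in seconds (purely an illustrative assumption of mine, not anything from the thread); working in log10 so a 10^100 factor stays representable:

```python
import math

def log10_total(log10_people, seconds):
    # log10 of (people count) * (assumed per-person disutility seconds**10)
    return log10_people + 10 * math.log10(seconds)

# At every step from one day down to one minute, compare N people tortured
# t seconds against 10^100 * N people tortured t-1 seconds.
dominated = all(
    log10_total(0.0, t) < log10_total(100.0, t - 1)
    for t in range(86400, 61, -1)
)
print(dominated)  # → True: the larger, shorter-tortured group is worse at every step
```

Any disutility curve that grows slower than 10^100-fold per second removed gives the same result.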
1private_messaging9y
That strikes me as a deliberate set up for a continuum fallacy. Also, why are you so sure that the number of people increases suffering in a linear way for even very large numbers? What is a number of people, anyway? I'd much prefer to have a [large number of exact copies of me] experience 1 second of headache than for one me to suffer it for a whole day, because those copies don't have any mechanism which could compound their suffering. They aren't even different subjectivities. I don't see any reason why a hypothetical mind upload of me running on multiple redundant hardware should be a utility monster, if it can't even tell subjectively how redundant its hardware is. Some anaesthetics do something similar, preventing any new long-term memories; people have no problem with taking those for surgery. Something's still experiencing pain, but it's not compounding into anything really bad (unless the drugs fail to work, or unless some form of long-term memory still works). A real example of a very strong preference for N independent experiences of 30 seconds of pain over 1 experience of 30*N seconds of pain.
1Kindly9y
It's not a continuum fallacy because I would accept "There is some pair (N,T) such that (N people tortured for T seconds) is worse than (10^100 N people tortured for T-1 seconds), but I don't know the exact values of N and T" as an answer. If, on the other hand, the comparison goes the other way for any values of N and T, then you have to accept the transitive closure of those comparisons as well. I'm not sure what you mean by this. I don't believe in linearity of suffering: that would be the claim that 2 people tortured for 1 day is the same as 1 person tortured for 2 days, and that's ridiculous. I believe in comparability of suffering, which is the claim that for some value of N, N people tortured for 1 day is worse than 1 person tortured for 2 days. Regarding anaesthetics: I would prefer a memory inhibitor for a painful surgery to the absence of one, but I would still strongly prefer to feel less pain during the surgery even if I know I will not remember it one way or the other. Is this preference unusual?
2[anonymous]9y
This is where the argument for choosing torture falls apart for me, really. I don't think there is any number of people getting dust specks in their eyes that would be worse than torturing one person for fifty years. I have to assume my utility function over other people is asymptotic; the amount of disutility of choosing to let even an infinity of people get dust specks in their eyes is still less than the disutility of one person getting tortured for fifty years. I think he's questioning the idea that two people getting dust specks in their eyes is twice the disutility of one person getting dust specks, and that is the linearity he's referring to. Personally, I think the problem stems from dust specks being such a minor inconvenience that it's basically below the noise threshold. I'd almost be indifferent between choosing for nothing to happen or choosing for everyone on Earth to get dust specks (assuming they don't cause crashes or anything).
7SimonJester239y
There's the question of linearity, but if you use big enough numbers you can brute force any nonlinear relationship, as Yudkowsky correctly pointed out some years ago. Take Kindly's statement: "There is some pair (N,T) such that (N people tortured for T seconds) is worse than (10^100 N people tortured for T-1 seconds), but I don't know the exact values of N and T"

We can imagine a world where this statement is true (probably for a value of T really close to 1). And we can imagine knowing the correct values of N and T in that world. But even then, if a critical condition is met, it will be true that "For all values of N, and for all T>1, there exists a value of A such that torturing N people for T seconds is better than torturing A*N people for T-1 seconds."

Sure, the value of A may be larger than 10^100... But then, 3^^^3 is already vastly larger than 10^100. And if it weren't big enough we could just throw a bigger number at the problem; there is no upper bound on the size of conceivable real numbers.

So if we grant the critical condition in question, as Yudkowsky does/did in the original post... Well, you basically have to concede that "torture" wins the argument, because even if you say that [hugenumber] of dust specks does not equate to a half-century of torture, that is NOT you winning the argument. That is just you trying to bid up the price of half a century of torture.

The critical condition that must be met here is simple, and is an underlying assumption of Yudkowsky's original post: All forms of suffering and inconvenience are represented by some real number quantity, with units commensurate to all other forms of suffering and inconvenience. In other words, the "torture one person rather than allow 3^^^3 dust specks" argument wins, quite predictably, if and only if it is true that the 'pain' component of the utility function is measured in one and only one dimension.

So the question is, basically, do you measure your utility function in terms of a sing
2SimonJester239y
It occurred to me to add something to my previous comments about the idea of harm being nonlinear, or something that we compute in multiple dimensions that are not commensurate.

One is that any deontological system of ethics automatically has at least two dimensions. One for general-purpose "utilons," and one for... call them "red flags." As soon as you accumulate even one red flag you are doing something capital-w Wrong in that system of ethics, regardless of the number of utilons you've accumulated.

The main argument justifying this is, of course, that you may think you have found a clever way to accumulate 3^^^3 utilons in exchange for a trivial amount of harm (torture ONLY one scapegoat!)... but the overall weighted average of all human moral reasoning suggests that people who think they've done this are usually wrong. Therefore, best to red-flag such methods, because they usually only sound clever.

Obviously, one may need to take this argument with a grain of salt, or 3^^^3 grains of salt. It depends on how strongly you feel bound to honor conclusions drawn by looking at the weighted average of past human decision-making.

----------------------------------------

The other observation that occurred to me is unrelated. It is about the idea of harm being nonlinear, which as I noted above is just plain not enough to invalidate the torture/specks argument by itself due to the ability to keep thwacking a nonlinear relationship with bigger numbers until it collapses.

Take as a thought-experiment an alternate Earth where, in the year 1000, population growth has stabilized at an equilibrium level, and will rise back to that equilibrium level in response to sudden population decrease. The equilibrium level is assumed to be stable in and of itself.

Imagine aliens arriving and killing 50% of all humans, chosen apparently at random. Then they wait until the population has returned to equilibrium (say, 150 years) and do it again. Then they repeat the process twice mor
0private_messaging9y
I mentioned duplication: in 3^^^3 people, most have to be exact duplicates of one another from birth to death. In your extinction example, once you have substantially more than the breeding population, extra people duplicate some aspects of your population (ability to breed), which causes you to find it less bad. Not every non-linear relationship can be thwacked with bigger and bigger numbers...
0private_messaging9y
For one thing, N=1, T=1 trivially satisfies your condition... I mean, suppose that you got yourself a function that takes in a description of what's going on in a region of spacetime and spits out a real number of how bad it is. Now, that function can do all sorts of perfectly reasonable things that could make it asymptotic for large numbers of people; for example, it could be counting distinct subjective experiences in there (otherwise a mind upload on very redundant hardware is a utility monster, despite having an identical subjective experience to the same upload running one time; that's much sillier than the usual utility monster, which feels much stronger feelings). This would impose a finite limit (for brains of finite complexity). One thing that function can't do is have the general property that f(a union b) = f(a) + f(b), because then we could just subdivide our space into individual atoms, none of which are feeling anything.
0Kindly9y
Obviously I only meant to consider values of T and N that actually occur in the argument we were both talking about.
0private_messaging9y
Well I'm not sure what's the point then. What you're trying to induct from it.
2TomStocker9y
Obviously. Just important to remember that extremity of suffering is something we frequently fail to think well about.
6Kindly9y
Absolutely. We're bad at anything that we can't easily imagine. Probably, for many people, intuition for "torture vs. dust specks" imagines a guy with a broken arm on one side, and a hundred people saying 'ow' on the other. The consequences of our poor imagination for large numbers of people (i.e. scope insensitivity) are well-studied. We have trouble doing charity effectively because our intuition doesn't take the number of people saved by an intervention into account; we just picture the typical effect on a single person. What, I wonder, are the consequences of our poor imagination for extremity of suffering? For me, the prison system comes to mind: I don't know how bad being in prison is, but it probably becomes much worse than I imagine if you're there for 50 years, and we don't think about that at all when arguing (or voting) about prison sentences.
7dxu9y
My heuristic for dealing with such situations is somewhat reminiscent of Hofstadter's Law: however bad you imagine it to be, it's worse than that, even when you take the preceding statement into account. In principle, this recursion should go on forever and lead to you regarding any sufficiently unimaginably bad situation as infinitely bad, but in practice, I've yet to have it overflow, probably because your judgment spontaneously regresses back to your original (inaccurate) representation of the situation unless consciously corrected for.
2Lumifer9y
Obligatory xkcd.
3Nornagest9y
That would have been a better comic without the commentary in the last panel.
-1Lumifer9y
But the alt text is great X-)
1TomStocker9y
My feeling is that situations like being caught doing something horrendous might or might not be subject to psychological adjustment; many situations of suffering are subject to it, and so might actually be not as bad as we thought. But chronic intense pain is literally unadjustable to some degree: you can adjust to being in intense suffering, but that doesn't make the intense suffering go away. That's why I think it's a special class of states of being, one that invokes action. What do people think?
4dxu9y
Okay, here's a new argument for you (originally proposed by James Miller, and which I have yet to see adequately addressed): assume that you live on a planet with a population of 3^^^3 distinct people. (The "planet" part is obviously not possible, and the "distinct" part may or may not be possible, but for the purposes of a discussion about morality, it's fine to assume these.)

Now let's suppose that you are given a choice: (a) everyone on the planet can get a dust speck in the eye right now, or (b) the entire planet holds a lottery, and the one person who "wins" (or "loses", more accurately) will be tortured for 50 years. Which would you choose? If you are against torture (as you seem to be, from your comment), you will presumably choose (a).

But now let's suppose you are allowed to blink just before the dust speck enters your eye. Call this choice (c). Seeing as you probably prefer not having a dust speck in your eye to having one in your eye, you will most likely prefer (c) to (a). However, 3^^^3 is so unimaginably enormous that blinking for even the tiniest fraction of a second increases the probability that you will be captured by a madman during that blink and tortured for 50 years by more than 1/3^^^3. But since the lottery proposed in (b) only offers a 1/3^^^3 probability of being picked for the torture, (b) is preferable to (c).

Then, by the transitivity axiom, if you prefer (c) to (a) and (b) to (c), you must prefer (b) to (a). Q.E.D.
0EHeller9y
This seems pretty unlikely to be true.
0dxu9y
I think you underestimate the magnitude of 3^^^3 (and thereby overestimate the magnitude of 1/3^^^3).
0EHeller9y
Both numbers seem basically arbitrarily small (probability 0). Since the planet has so many distinct people, and they blink more than once a day, you are essentially asserting that on that planet, multiple people are kidnapped and tortured for more than 50 years several times a day.
1dxu9y
Well, I mean, obviously a single person can't be kidnapped more than once every 50 years (assuming that's how long each torture session lasts), and certainly not several times a day, since he/she wouldn't have finished being tortured quickly enough to be kidnapped again. But yes, the general sentiment of your comment is correct, I'd say. The prospect of a planet with daily kidnappings and 50-year-long torture sessions may seem strange, but that sort of thing is just what you get when you have a population count of 3^^^3.
-2EHeller9y
I worked it out back of the envelope, and the probability of being kidnapped when you blink is only 1/5^^^5.
1dxu9y
Well, now I know you're underestimating how big 3^^^3 is (and 5^^^5, too). But let's say somehow you're right, and the probability really is 1/5^^^5. All I have to do is modify the thought experiment so that the planet has 5^^^5 people instead of 3^^^3. There, problem solved. So, new question: would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 5^^^5 people get dust specks in their eyes?
0Epictetus9y
And the time spent setting up a lottery and carrying out the drawing also increases the probability that someone else gets captured and tortured in the intervening time, far more than blinking would. In fact, the probability goes up anyway in that fraction of a second, whether you blink or not. You can't stop time, so there's no reason to prefer (c) to (b).
2dxu9y
Ah, sorry; I wasn't clear. What I meant was that blinking increases your probability of being tortured beyond the normal "baseline" probability of torture. Obviously, even if you don't blink, there's still a probability of you being tortured. My claim is that blinking affects the probability of being tortured so that the probability is higher than it would be if you hadn't blinked (since you can't see for a fraction of a second while blinking, leaving you ever-so-slightly more vulnerable than you would be with your eyes open), and moreover that it would increase by more than 1/3^^^3. So basically what I'm saying is that P(torture|blink) > P(torture|~blink) + 1/3^^^3.
1Epictetus9y
Let me see if I get this straight: The choice comes down to dust specks at time T or dust specks at time T + dT, where the interval dT allows you time to blink. The argument is that in the interval dT, the probability of being captured and tortured increases by an amount greater than your odds in the lottery. It seems to me that the blinking is immaterial. If the question were whether to hold the lottery today or put dust in everyone's eyes tomorrow, the argument should be unchanged. It appears to hinge on the notion that as time increases, so do the odds of something bad happening, and therefore you'd prefer to be in the present instead of the future. The problem I have is that the future is going to happen anyway. Once the interval dT passes, the odds of someone being captured in that time will go up regardless of whether you chose the lottery or not.
1dxu9y
If I told you that a dust speck was about to float into your left eye in the next second, would you (a) take it full in the eye, or (b) blink to keep it out? If you say you would blink, you are implicitly acknowledging that you prefer not getting specked to getting specked, and thereby conceding that getting specked is worse than not getting specked. If you would take it full in the eye, well... you're weird.
3[anonymous]9y
It's not (necessarily) about dust specks accidentally leading to major accidents. But if you think that having a dust speck in your eye may be even slightly annoying (whether you consciously know that or not), the cost you have from having it fly into your eye is not zero. Now something not zero multiplied by a sufficiently large number will necessarily be larger than the cost of one human being's life in torture.
3[anonymous]9y
Now you are getting it completely wrong. You can't add up harm from dust specks if it is happening to different people. Every individual has the capacity to recover from it. Think about it. With that logic it is worse to rip a hair from every living being in the universe than to nuke New York. If people in charge reasoned that way we might have armageddon in no time.
1helltank9y
That's ridiculous. So mild pains don't count if they're spread over many different people? Let's give a more obvious example. It's better to kill one person than to amputate the right hands of 5,000 people, because the total pain will be less. Scaling up, we can say that it's better to amputate the right hands of 50,000 people than to torture one person to death, because the total pain will be less. Keep repeating this in your head (see how consistent it feels, how it makes sense). Now just extrapolate to the claim that it's better to have 3^^^3 people get dust specks in their eyes than to torture one person to death, because the total pain will be less. The hair-ripping argument isn't good enough, because (people on Earth) x (pain from hair rip) < (people in New York) x (pain of being nuked). The math doesn't add up in your straw-man example, unlike with the actual example given. As a side note, you are also appealing to consequences.
1dxu9y
I think Okeymaker was actually referring to all the people in the universe. While the number of "people" in the universe (defining a "person" as a conscious mind) isn't a known number, let's do as blossom does and assume Okeymaker was referring to the Level I multiverse. In that case, the calculation isn't nearly as clear-cut. (That being said, if I were considering a hypothetical like that, I would simply modus ponens Okeymaker's modus tollens and reply that I would prefer to nuke New York.)
2[anonymous]9y
If

1. each human death has only finite cost (we sure act this way in our everyday lives, exchanging human lives for the convenience of driving around in cars etc.), and
2. by "our universe" you do not mean only the observable universe, but include the Level I multiverse,

then yes, that is the whole point. A tiny amount of suffering multiplied by a sufficiently large number obviously is eventually larger than the fixed cost of nuking New York. Unless you can tell me why my model for the costs of suffering distributed over multiple people is wrong, I don't see why I should change it. "I don't like the conclusions!!!" is not a valid objection. If they ever justifiably start to reason that way, i.e. if they actually have the power to rip a hair from every living human being, I think we'll have larger problems than the potential nuking of New York.
0[anonymous]9y
Okay, I was trying to learn from this post, but now I see that I have to explain some things myself in order for this communication to become useful. When it comes to pain, it is hard to explain why one person's great suffering is worse than many people suffering very, very little if you don't already understand it yourself. So let us change the currency from pain to money. Let's say that you and I need to fund a large plantation of algae in order to let the Earth's population escape starvation due to lack of food. This project is of great importance for the whole world, so we can force anyone to become a sponsor, and this is good because we need the money FAST. We work for the whole world (read: Earth) and we want to minimize the damage from our actions. This project is really expensive, however... Should we: a) take one dollar from every person around the world on a minimum wage who can still afford a house, food etc. even if we take that one dollar? Or should we b) take all the money (instantly) from Denmark and watch it collapse into bankruptcy? If you ask me, it is obvious that we don't want Denmark to go bankrupt just because it may annoy some people to sacrifice one dollar.
1[anonymous]9y
In this case I do not disagree with you. The number of people on Earth is simply not large enough. But if you asked me whether to take money from 3^^^3 people or to throw Denmark into bankruptcy, I would choose the latter. Math should override intuition. So unless you give me a model that you can convince me of that is more reasonable than adding up costs/utilities, I don't think you will change my mind.
2[anonymous]9y
Now I see what is fundamentally wrong with the article and your reasoning, from MY perspective. You don't seem to understand the difference between a permanent sacrifice and a temporary one. If we substitute index fingers for the dust specks, for example, I agree that it is reasonable to think that killing one person is far better than having 3 billion people (we don't need 3^^^3 for this one) lose their index fingers, because that is a permanent sacrifice. At least for now, we can't have fingers grow back just like that. To get dust in your eye, on the other hand, is only temporary. You will get over it real quick and forget all about it. But 50 years of torture is something you will never fully heal from; it will ruin a person's life and cause permanent damage.
2Jiro9y
The trouble is that there is a continuous sequence from:

Take $1 from everyone
Take $1.01 from almost everyone
Take $1.02 from almost almost everyone
...
Take a lot of money from very few people (Denmark)

If you think that taking $1 from everyone is okay, but taking a lot of money from Denmark is bad, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly. You will have to say, for instance, that taking $20 each from 1/20 of the population of the world is good, but taking $20.01 each from slightly less than 1/20 of the population of the world is bad. Can you say that?
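Jiro's chain can be made concrete with a quick sketch. This is purely illustrative (the world population of 7 billion and the step sizes are my assumptions, not from the thread); the point is that every step raises the same total while the per-person burden grows smoothly:

```python
# Hypothetical illustration of Jiro's continuous sequence: hold the total
# raised fixed and shift the burden from "everyone" to "very few people".
POPULATION = 7_000_000_000          # assumed world population
TOTAL = float(POPULATION)           # starting point: $1 from everyone

per_person = 1.0
while per_person <= 1_000_000:
    payers = TOTAL / per_person                 # people who must pay
    share = payers / POPULATION                 # fraction of the world
    print(f"${per_person:>12,.2f} each from {share:.5%} of the world")
    per_person *= 10

# Each row raises exactly the same total; there is no step at which the
# scheme obviously flips from "fine" to "Denmark goes bankrupt".
```

Every row is a tiny perturbation of its neighbors, which is exactly the sorites pressure the comment describes.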
-1Lumifer9y
If you think that 100°C water is hot and 0°C water is cold, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly.
0dxu9y
No, because temperature is (very close to) a continuum, whereas good/bad is a binary. To see this more clearly, you can replace the question "Is this action good or bad?" with "Would an omniscient, moral person choose to take this action?", and you can instantly see the answer can only be "yes" (good) or "no" (bad). (Of course, it's not always clear which choice the answer is--hence why so many argue over it--but the answer has to be, in principle, either "yes" or "no".)
3Lumifer9y
First, I'm not talking about temperature, but about categories "hot" and "cold". Second, why in the world would good/bad be binary? I have no idea -- I don't know what an omniscient person (aka God) will do, and in any case the answer is likely to be "depends on which morality we are talking about". Oh, and would an omniscient being call that water hot or cold?
0dxu9y
You'll need to define your terms for that, then. (And for the record, I don't use the words "hot" and "cold" exclusively; I also use terms like "warm" or "cool" or "this might be a great temperature for a swimming pool, but it's horrible for tea".) Also, if you weren't talking about temperature, why bother mentioning degrees Celsius when talking about "hotness" and "coldness"? Clearly temperature has something to do with it, or else you wouldn't have mentioned it, right? Because you can always replace a question of goodness with the question "Would an omniscient, moral person choose to take this action?". Just because you have no idea what the answer could be doesn't mean the true answer can fall outside the possible space of answers. For instance, you can't answer the question of "Would an omniscient moral reasoner choose to take this action?" with something like "fish", because that falls outside of the answer space. In fact, there are only two possible answers: "yes" or "no". It might be one; it might be the other, but my original point was that the answer to the question is guaranteed to be either "yes" or "no", and that holds true even if you don't know what the answer is. There is only one "morality" as far as this discussion is concerned. There might be other "moralities" held by aliens or whatever, but the human CEV is just that: the human CEV. I don't care about what the Babyeaters think is "moral", or the Pebblesorters, or any other alien species you care to substitute--I am human, and so are the other participants in this discussion. The answer to the question "which morality are we talking about?" is presupposed by the context of the discussion. If this thread included, say, Clippy, then your answer would be a valid one (although even then, I'd rather talk game theory with Clippy than morality--it's far more likely to get me somewhere with him/her/it), but as it is, it just seems like a rather unsubtle attempt to dodge the question.
1Lumifer9y
I don't think so. You're making a circular argument -- good/bad is binary because there are only two possible states. I do not agree that there are only two possible states. Really? Either I'm not a participant in this discussion or you're wrong. See: a binary outcome :-D I have no idea what the human CEV is and even whether such a thing is possible. I am familiar with the concept, but I have doubts about its reality.
0dxu9y
Name a third alternative that is actually an answer, as opposed to some sort of evasion ("it depends"), and I'll concede the point. Also, I'm aware that this isn't your main point, but... how is the argument circular? I'm not saying something like, "It's binary, therefore there are two possible states, therefore it's binary"; I'm just saying "There are two possible states, therefore it's binary." Are you human? (y/n) Which part do you object to? The "coherent" part, the "extrapolated" part, or the "volition" part?
0Lumifer9y
"Doesn't matter". First of all you're ignoring the existence of morally neutral questions. Should I scratch my butt? Lessee, would an omniscient perfectly moral being scratch his/her/its butt? Oh dear, I think we're in trouble now... X-D Second, you're assuming atomicity of actions and that's a bad assumption. In your world actions are very limited -- they can be done or not done, but they cannot be done partially, they cannot be slightly modified or just done in a few different ways. Third, you're assuming away the uncertainty of the future and that also is a bad assumption. Proper actions for an omniscient being can very well be different from proper actions for someone who has to face uncertainty with respect to consequences. Fourth, for the great majority of dilemmas in life (e.g. "Should I take this job?", "Should I marry him/her?", "Should I buy a new phone?") the answer "what an omniscient moral being would choose" is perfectly useless. The concept of CEV seems to me to be the direct equivalent of "God's will" -- handwaveable in any direction you wish while retaining enough vagueness to make specific discussions difficult or pretty much impossible. I think my biggest objection is to the "coherent" part while also having great doubts about the "extrapolated" part as well.
0dxu9y
(Side note: this conversation is taking a rather strange turn, but whatever.) If its butt feels itchy, and it would prefer for its butt to not feel itchy, and the best way to make its butt not feel itchy is to scratch it, and there are no external moral consequences to its decision (like, say, someone threatening to kill 3^^^3 people iff it scratches its butt)... well, it's increasing its own utility by scratching its butt, isn't it? If it increases its own utility by doing so and doesn't decrease net utility elsewhere, then that's a net increase in utility. Scratch away, I say. Sure. I agree I did just handwave a lot of stuff with respect to what an "action" is... but would you agree that, conditional on having a good definition of "action", we can evaluate "actions" morally? (Moral by human standards, of course, not Pebblesorter standards.) Agreed, but if you come up with a way to make good/moral decisions in the idealized situation of omniscience, you can generalize to uncertain situations simply by applying probability theory. Again, I agree... but then, knowledge of the Banach-Tarski paradox isn't of much use to most people. Fair enough. I don't have enough domain expertise to really analyze your position in depth, but at a glance, it seems reasonable.
0Lumifer9y
The assumption that morality boils down to utility is a rather huge assumption :-) Conditional on having a good definition of "action" and on having a good definition of "morally". I don't think so, at least not "simply". An omniscient being has no risk and no risk aversion, for example. Morality is supposed to be useful for practical purposes. Heated discussions over how many angels can dance on the head of a pin got a pretty bad rap over the last few centuries... :-)
0dxu9y
It's not an assumption; it's a normative statement I choose to endorse. If you have some other system, feel free to endorse that... but then we'll be discussing morality, and not meta-morality or whatever system originally produced your objection to Jiro's distinction between good and bad. Agree. Well, it could have risk aversion. It's just that risk aversion never comes into play during its decision-making process due to its omniscience. Strip away that omniscience, and risk aversion very well might rear its head. I disagree. Take the following two statements: 1. Morality, properly formalized, would be useful for practical purposes. 2. Morality is not currently properly formalized. There is no contradiction in these two statements.
1Lumifer9y
But they have a consequence: Morality currently is not useful for practical purposes. That's... an interesting position. Are you willing to live with it? X-) You can, of course define morality in this particular way, but why would you do that?
0Good_Burning_Plastic9y
By that definition, almost all actions are bad. Also, why the heck do you think there exist words for "better" and "worse"?
0dxu9y
True. I'm not sure why that matters, though. It seems trivially obvious to me that a random action selected out of the set of all possible actions would have an overwhelming probability of being bad. But most agents don't select actions randomly, so that doesn't seem to be a problem. After all, the key aspect of intelligence is that it allows you to hit extremely tiny targets in configuration space; the fact that most configurations of particles don't give you a car doesn't prevent human engineers from making cars. Why would the fact that most actions are bad prevent you from choosing a good one? Those are relative terms, meant to compare one action to another. That doesn't mean you can't classify an action as "good" or "bad"; for instance, if I decided to randomly select and kill 10 people today, that would be a unilaterally bad action, even if it would theoretically be "worse" if I decided to kill 11 people instead of 10. The difference between the two is like the difference between asking "Is this number bigger than that number?" and "Is this number positive or negative?".
1Jiro9y
My opinion would change gradually between 100 degrees and 0 degrees. Either I would use qualifiers so that there is no abrupt transition, or else I would consider something to be hot in a set of situations and the size of that set would decrease gradually.
1dxu9y
Typo here?
0[anonymous]9y
YES, because that is how economics works! You can't take a lot of money from ONE person without him getting poor, but you CAN take money from a lot of people without ruining them! Money is a circulating resource, and just like pain, you can recover from small losses after a time.
0[anonymous]9y
I think my last response starting with YES got lost somehow, so I will clarify here. I don't follow the sequence because I don't know where the critical limit is. Why? Because the critical limit depends on other factors which I can't foresee. Read up on basic global economics. But YES, in theory I can take a little money from everyone without ruining a single one of them, since it balances out, but if I take a lot of money from one person I make him poor. That is how economics works: you can recover from small losses easily, while some are too big to ever recover from, hence why some banks go bankrupt sometimes. And pain is similar, since I can recover from a dust speck in my eye, but not from being tortured for 50 years. The dust specks are not permanent sacrifices. If they were, I agree that they could stack up.
2Jiro9y
You may not know exactly where the limit is, but the point isn't that the limit is at some exact number, the point is that there is a limit. There's some point where your reasoning makes you go from good to bad even though the change is very small. Do you accept that such a limit exists, even though you may not know exactly where it is?
0[anonymous]9y
Yes I do.
2Jiro9y
So you recognize that your original statement about $1 versus bankruptcy also forces you to make the same conclusion about $20.00 versus $20.01 (or whatever the actual number is, since you don't know it). But making the conclusion about $20.00 versus $20.01 is much harder to justify. Can you justify it? You have to be able to, since it is implied by your original statement.
0[anonymous]9y
No, I don't have to make the same conclusion about $20.00 versus $20.01. I left a safety margin when I said $1, since I don't want to follow the sequence but am very, very sure that $1 is a safe number. I don't know exactly how much I can risk taking from a random individual before I risk ruining him, but if I take only one dollar from a person who can afford a house and food, I am pretty safe.
0Jiro9y
Yes, you do. You just admitted it, although the number might not be 20. And whether you admit it or not it logically follows from what you said up above.
0[anonymous]9y
Maybe I didn't understand you the first time.
1Jiro9y
Your belief about $1 versus bankruptcy logically implies a similar belief about $20.00 versus $20.01 (or whatever the actual numbers are). You can't just answer that that "might" be the case--if your original belief is as described, that is the case. You have to be willing to defend the logical consequence of what you said, not just defend the exact words that you said.
0[anonymous]9y
What do you mean by "whatever the actual numbers are"? Numbers for what? For the amount it takes to ruin someone? As long as the individual donations don't ruin the donors, I accept a higher donation from a smaller population. Is that what you mean?
1Jiro9y
I just wrote 20 because I have to write something, but there is a number. This number has a value, even if you don't know it. Pretend I put the real number there instead of 20.
1[anonymous]9y
Yes, but still, what number? IF it is as I already suggested, the number for the amount of money that can be taken without ruining anyone, then I agree that we could take that amount of money instead of 1 dollar.
0dxu9y
So you're saying there exists such a number, such that taking that amount of money from someone wouldn't ruin them, but taking that amount plus a tiny bit more (say, 1 cent) would?
0Jiro9y
I don't think you understand. Your original statement about $1 versus bankruptcy logically implies that there is a number such that it is okay to take exactly that amount of money from a certain number of people, but wrong to take a very tiny amount more. Even though you don't know exactly what this number is, you know that it exists. Because this number is a logical consequence of what you said, you must be able to justify having such a number.
2[anonymous]9y
Yes, in my last comment I agreed to it. There is such a number. I don't think you understand my reasons why, which I already explained. It is wrong to take a tiny amount more, since that will ruin them. I can't know exactly what that amount is, since global and local economies aren't that stable. Tapping out.
1private_messaging9y
Now, do you have any actual argument as to why the 'badness' function computed over a box containing two persons with a dust speck is exactly twice the badness of a box containing one person with a dust speck, all the way up to very large numbers (when you may even have exhausted the number of possible distinct people)? I don't think you do. This is why this stuff strikes me as pseudomath. You don't even state your premises, let alone justify them.
0[anonymous]9y
You're right, I don't. And I do not really need it in this case. What I need is a cost function C(e,n) - e is some event and n is the number of people being subjected to said event, i.e. everyone gets their own - where for ε > 0: C(e,n+m) > C(e,n) + ε for some m. I guess we can limit e to "torture for 50 years" and "dust specks" so this generally makes sense at all. The reason why I would want to have such a cost function is because I believe that it should be more than infinitesimally worse for 3^^^^3 people to suffer than for 3^^^3 people to suffer. I don't think there should ever be a point where you can go "Meh, not much of a big deal, no matter how many more people suffer." If however the number of possible distinct people should be finite - even after taking into account level II and level III multiverses - due to discreteness of space and discreteness of permitted physical constants, then yes, this is all null and void. But I currently have no particular reason to believe that there should be such a bound, while I do have reason to believe that permitted physical constants should be from a non-discrete set.
-2private_messaging9y
Well, within the 3^^^3 people you have every single possible brain replicated a gazillion times already (there are only so many ways you can arrange the atoms in the volume of a human head so as to be computing something subjectively different, after all, and the number of such arrangements is unimaginably smaller than 3^^^3). I don't think that e.g. I must massively prioritize the happiness of a brain upload of me running on multiple redundant hardware (which subjectively feels the same as if it were running in one instance; it doesn't feel any stronger because there are more 'copies' of it running in perfect unison, and it can't even tell the difference. It won't affect the subjective experience if the CPUs running the same computation are slightly physically different). edit: also, again, pseudomath, because you could have C(dustspeck, n) = 1 - 1/(n+1): your property holds, but the function is bounded, so if c(torture, 1) = 2 then you'll never exceed it with dust specks. Seriously, you people (the LW crowd in general) need to take more calculus or something before your mathematical intuitions become in any way relevant to anything whatsoever. It does feel intuitive that with your epsilon it's going to keep growing without limit, but that's simply not true.
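private_messaging's counterexample is easy to verify numerically. A minimal sketch: the function C(dustspeck, n) = 1 - 1/(n+1) and the cost c(torture, 1) = 2 are taken from the comment; the per-speck cost in the linear version is an arbitrary illustrative number:

```python
def bounded_cost(n):
    """private_messaging's example: strictly increasing in n,
    yet bounded above by 1."""
    return 1 - 1 / (n + 1)

def linear_cost(n, per_speck=1e-12):
    """Naive linear aggregation: a tiny fixed cost per person, summed."""
    return per_speck * n

TORTURE_COST = 2  # c(torture, 1) = 2, as in the comment

# The bounded version never exceeds the torture cost, however large n gets:
for n in (10, 10**6, 10**100):
    assert bounded_cost(n) < TORTURE_COST

# The linear version eventually does:
assert linear_cost(10**13) > TORTURE_COST
```

This is the crux of the disagreement in the surrounding comments: whether the aggregate cost of specks is bounded (torture can never be reached) or grows linearly (torture is eventually exceeded).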
1[anonymous]9y
I consider entities in computationally distinct universes to also be distinct entities, even if the arrangements of their neurons are the same. If I have an infinite (or sufficiently large) set of physical constants such that in those universes human beings could emerge, I will also have enough human beings. No. I will always find a larger number which is at least ε greater. I fixed ε before I talked about n,m. So I find numbers m_1,m_2,... such that C(dustspeck,m_j) > jε. Besides which, even if I had somehow messed up, you're not here (I hope) to score easy points because my mathematical formalization is flawed when it is perfectly obvious where I want to go.
0private_messaging9y
Well, in my view, some details of implementation of a computation are totally indiscernible 'from the inside' and thus make no difference to the subjective experiences, qualia, and the like. I definitely don't care if there's 1 me, 3^^^3 copies of me, or 3^^^^3, or 3^^^^^^3 , or the actual infinity (as the physics of our universe would suggest), where the copies are what thinks and perceives everything exactly the same over the lifetime. I'm not sure how counting copies as distinct would cope with an infinity of copies anyway. You have a torture of inf persons vs dust specks in inf*3^^^3 persons, then what? Albeit it would be quite hilarious to see if someone here picks up the idea and starts arguing that because they're 'important', there must be a lot of copies of them in the future, and thus they are rightfully an utility monster.
0Kindly9y
Consider the flip side of the argument: would you rather get a dust speck in your eye or have a 1 in 3^^^3 chance of being tortured for 50 years? We take much greater risks without a moment's thought every time we cross the street. The chance that a car comes out of nowhere and hits you in just the right way to both paralyze you and cause incredible pain to you for the rest of your life may be very small; but it's probably not smaller than 1 in 10^100, let alone than 1 in 3^^^3.
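Kindly's flip-side comparison is just an expected-value check. A sketch with illustrative numbers (the 1 in 10^100 figure appears in the comment; the disutility magnitudes are made up, and only their relative sizes matter):

```python
from fractions import Fraction  # exact arithmetic, no float underflow

DUST = Fraction(-1, 1000)        # made-up disutility of one dust speck
TORTURE = Fraction(-10**9)       # made-up disutility of 50 years of torture

p_lottery = Fraction(1, 10**100)     # generous stand-in for 1/3^^^3
ev_lottery = p_lottery * TORTURE     # expected disutility of the lottery

# The certain speck is far worse in expectation than the torture lottery:
assert DUST < ev_lottery
```

With any remotely plausible numbers, the certain speck dominates, which is why the comment compares the lottery to everyday risks like crossing the street.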
3[anonymous]9y
I agree with this analysis provided there is some reason for linear aggregation. Why should the utility of the world be the sum of the utilities of its inhabitants? Why not, for instance, the min of the utilities of its inhabitants? I think that's what my intuition wants to do anyway: care about how badly off the worst-off person is, and try to improve that.

U1(world) = min_people(u(person)) instead of U2(world) = sum_people(u(person))

so:

U1(torture) = -big, U1(dust) = -tiny
U2(torture) = -big, U2(dust) = -outrageously massive

Thus, if you use U1, you choose dust because -tiny > -big, but if you use U2, you choose torture because -big > -outrage. But I see no real reason to prefer one intuition over the other, so my question is this: Why linear aggregation of utilities?
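The contrast between the two aggregators can be written out directly. A sketch with made-up magnitudes (-100 for fifty years of torture, -0.001 for a speck; only the relative sizes matter, and 10^9 stands in for 3^^^3, which is far too large to represent):

```python
SPECK, TORTURE = -0.001, -100.0
N = 10**9  # stand-in for 3^^^3 (the real number won't fit anywhere)

# U2: sum of individual utilities (linear aggregation)
sum_specks = SPECK * N       # the "outrageously massive" total harm
sum_torture = TORTURE
assert sum_torture > sum_specks      # sum-aggregation prefers torture

# U1: utility of the worst-off person (min aggregation)
min_specks = SPECK           # the worst-off speck victim barely suffers
min_torture = TORTURE
assert min_specks > min_torture      # min-aggregation prefers specks
```

The two aggregators flip the answer, which is exactly the commenter's point: the dilemma hinges on the choice of aggregation rule, not on the utility numbers themselves.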
0Wes_W9y
I find it hard to believe that you believe that. Under that metric, for example, "pick a thousand happy people and kill their dogs" is a completely neutral act, along with lots of other extremely strange results.
0gjm9y
Or, for a maybe more dramatic instance: "Find the world's unhappiest person and kill them". Of course total utilitarianism might also endorse doing that (as might quite a lot of people, horrible though it sounds, on considering just how wretched the lives of the world's unhappiest people probably are) -- but min-utilitarianism continues to endorse doing this even if everyone in the world -- including the soon-to-be-ex-unhappiest-person -- is extremely happy and very much wishes to go on living.
0Jiro9y
The specific problem which causes that is that most versions of utilitarianism don't allow the fact that someone desires not to be killed to affect the utility calculation, since after they have been killed, they no longer have utility.
0Wes_W9y
Yes, this is a failure mode of (some forms of?) utilitarianism, but not the specific weirdness I was trying to get at, which was that if you aggregate by min(), then it's completely morally OK to do very bad things to huge numbers of people - in fact, it's no worse than radically improving huge numbers of lives - as long as you avoid affecting the one person who is worst-off. This is a very silly property for a moral system to have. You can attempt to mitigate this property with too-clever objections, like "aha, but if you kill a happy person, then in the moment of their death they are temporarily the most unhappy person, so you have affected the metric after all". I don't think that actually works, but didn't want it to obscure the point, so I picked "kill their dog" as an example, because it's a clearly bad thing which definitely doesn't bump anyone to the bottom.
0[anonymous]9y
Oh, good point; maybe a kind of alphabetical ordering could break ties. So then, we disregard everyone who isn't affected by the possible action and maximize over the utilities of those who are. But still, this prefers a million people being punched once to any one person being punched twice, which seems silly; I'm just trying to parse out my intuition for choosing dust specks. I grant that the flaws in other possible methods are a mark in favor of linear aggregation, but what positive reasons are there for it?
9jsartor74y
Min is a really bad metric - it means that, for example, my decision of whether to torture someone or not doesn't matter as long as someone out there is also getting tortured. So it doesn't actually lead to an answer of the dust speck problem. And if you limit it to the min of people involved, it leads to things like... "then it's better to break 1 billion people's non-dominant arms than one person's dominant arm" which in my opinion is absurd.

If anything is aggregating nonlinearly it should be the 50 years of torture, to which one person has the opportunity to acclimate; there is no individual acclimatization to the dust specks because each dust speck occurs to a different person

I find this reasoning problematic, because in the dust specks there is effectively nothing to acclimate to... the amount of inconvenience to the individual will always be smaller in the speck scenario (excluding secondary effects, such as the individual being distracted and ending up in a car crash, of course).

Which exa... (read more)

Well, as long as we've gone to all the trouble to collect 85 comments on this topic, this seems like a great chance for a disagreement case study. It would be interesting to collect stats on who takes what side, and to relate that to their various kinds of relevant expertise. For the moment I am disturbed by the fact that Eliezer and I seem to be in a minority here, but comforted a bit by the fact that we seem to know decision theory better than most. But I'm open to new data on the balance of opinion and the balance of relevant expertise.

Now, this is considerably better reasoning - however, there was no clue to this being a decision that would be selected over and over by countless people. Had it been worded "you among many have to make the following choice...", I could agree with you. But the current wording implied that it was a once-a-universe sort of choice.

The choice doesn't have to be repeated to present you with the dilemma. Since all elements of the problem are finite - not countless, finite - if you refuse all actions in the chain, you should also refuse the start of t... (read more)

-4homunq12y
"Swoops in, takes one child, and leaves"... wow. I'd like to say I can't imagine being so insensitive as to think this would be a good thing to do (even if not worth the money), but I actually can. And why would you use that horrible example, when the argument would work just fine if you substituted "A permanent presence devoted to giving one person three square meals a day."

The diagnosis of scope insensitivity presupposes that people are trying to perform a utilitarian calculation and failing. But there is an ordinary sense in which a sufficiently small harm is no wrong. A harm must reach a certain threshold before the victim is willing to bear the cost of seeking redress. Harms that fall below the threshold are shrugged off. And an unenforced law is no law. This holds even as the victims multiply. A class action lawsuit is possible, summing the minuscule harms, but our moral intuitions are probably not based on those.

Actually, that was a poor example because taxing one penny has side effects. I would rather save one life and everyone in the world poked with a stick with no other side effects, because I put a substantial probability on lifespans being longer than many might anticipate. So even repeating this six billion times to save everyone's life at the price of 120 years of being repeatedly poked with a stick, would still be a good bargain.

Where there are no special inflection points, a bad repeated action should be a bad individual action, a good repeated action ... (read more)

Robin: dare I suggest that one area of relevant expertise is normative philosophy for-@#%(^^$-sake?!

It's just painful -- really, really, painful -- to see dozens of comments filled with blinkered nonsense like "the contradiction between intuition and philosophical conclusion" when the alleged "philosophical conclusion" hinges on some ridiculous simplistic Benthamite utilitarianism that nobody outside of certain economics departments and insular technocratic computer-geek blog communities actually accepts! My model for the torture cas... (read more)

Pablo · 11y · 3 points
As someone who has studied moral philosophy for many years, I would like to point out that I agree with Robin and Eliezer, and that I know many professional moral philosophers who would agree with them, too, if presented with this moral dilemma. It is also worth noting that, many comments above, Gaverick Matheny provided a link to a paper by a professional moral philosopher, published in one of the two most prestigious moral philosophy journals in the English-speaking world, which defends essentially the same conclusion. And as the argument presented in that paper makes clear, the conclusion that one should torture need not be motivated by a theoretical commitment to some substantive thesis about the nature of pain or aggregation (as Gowder claims), but follows instead by transitivity from a series of comparisons that everyone--including those who deny that conclusion--finds intuitively plausible.
BerryPick6 · 11y · 3 points
If anyone still has a hard time believing that this is not an unorthodox position among Philosophers, I'd like to recommend Shelly Kagan's excellent The Limits of Morality, which discusses 'radical consequentialism' and defends a similar conclusion.

dozens of comments filled with blinkered nonsense like "the contradiction between intuition and philosophical conclusion" when the alleged "philosophical conclusion" hinges on some ridiculous simplistic Benthamite utilitarianism that nobody outside of certain economics departments and insular technocratic computer-geek blog communities actually accepts!

You've quoted one of the few comments which your criticism does not apply to. I carry no water for utilitarian philosophy and was here highlighting its failure to capture moral intuition.

all types of pleasures and pains are commensurable such that for all i, j, given a quantity of pleasure/pain experience i, you can find a quantity of pleasure/pain experience j that is equal to (or greater or less than) it. (i.e. that pleasures and pains exist on one dimension)

Is a consistent and complete preference ordering without this property possible?

"An option that dominates in finite cases will always provably be part of the maximal option in finite problems; but in infinite problems, where there is no maximal option, the dominance of the option for the infinite case does not follow from its dominance in all finite cases."

From Peter's proof, it seems like you should be able to prove that an arbitrarily large (but finite) utility function will be dominated by events with arbitrarily large (but finite) improbabilities.

"Robin Hanson was correct, I do think that TORTURE is the obvious opti... (read more)

Elizer: "It's wrong when repeated because it's also wrong in the individual case. You just have to come to terms with scope sensitivity."

But determining whether or not a decision is right or wrong in the individual case requires that you be able to place a value on each outcome. We determine this value in part by using our knowledge of how frequently the outcomes occur and how much time/effort/money it takes to prevent or assuage them. Thus knowing the frequency that we can expect an event to occur is integral to assigning it a value in the fi... (read more)

Where there are no special inflection points, a bad repeated action should be a bad individual action, a good repeated action should be a good individual action. Talking about the repeated case changes your intuitions and gets around your scope insensitivity, it doesn't change the normative shape of the problem (IMHO).

Hmm, I see your point. I can't help like feeling that there are cases where repetition does matter, though. For instance, assuming for a moment that radical life-extension and the Singularity and all that won't happen, and assuming that we co... (read more)

Constant, my reference to your quote wasn't aimed at you or your opinions, but rather at the sort of view which declares that the silly calculation is some kind of accepted or coherent moral theory. Sorry if it came off the other way.

Nick, good question. Who says that we have consistent and complete preference orderings? Certainly we don't have them across people (consider social choice theory). Even to say that we have them within individual people is contestable. There's a really interesting literature in philosophy, for example, on the incommensura... (read more)

Who says that we have consistent and complete preference orderings?

Who says you need them? The question wasn't to quantify an exact balance. You just need to be sure enough to make the decision that one side outweighs the other for the numbers involved.

By my values, all else equal, for all x between 1 millisecond and fifty years, 10^1000 people being tortured for time x is worse than one person being tortured for time x*2. Would you disagree?

So, 10^1000 people tortured for (fifty years)/2 is worse than one person tortured for fifty years.
Then, 10^2000 peo... (read more)
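The halving chain above can be sketched numerically. This is a rough illustration, not part of the original comment: the one-second cutoff is an arbitrary stopping point, and the step factor 10^1000 comes from the quoted premise.

```python
# Each step of the chain trades "one person tortured for time t" for
# "10^1000 people tortured for time t/2", which the quoted premise says
# is worse. Count the halvings needed to take fifty years below one
# second, and the population multiplier accumulated along the way.
import math

fifty_years_in_seconds = 50 * 365.25 * 24 * 3600   # ~1.58e9 seconds
halvings = math.ceil(math.log2(fifty_years_in_seconds))
population_exponent = 1000 * halvings  # chain ends with 10^this people

print(halvings)             # 31 halvings to get below one second
print(population_exponent)  # 10^31000 people -- vast, yet << 3^^^3
```

By transitivity, thirty-one applications of the premise connect fifty years of torture to sub-second discomfort for "only" 10^31000 people, a number still unimaginably smaller than 3^^^3.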

Since Robin is interested in data... I chose SPECKS, and was shocked by the people who chose TORTURE on grounds of aggregated utility. I had not considered the possibility that a speck in the eye might cause a car crash (etc) for some of those 3^^^3 people, and it is the only reason I see for revising my original choice. I have no accredited expertise in anything relevant, but I know what decision theory is.

I see a widespread assumption that everything has a finite utility, and so no matter how much worse X is than Y, there must be a situation in which it... (read more)

Eliezer, a problem seems to be that the speck does not serve the function you want it to in this example, at least not for all readers. In this case, many people see a special penny because there is some threshold value below which the least bad bad thing is not really bad. The speck is intended to be an example of the least bad bad thing, but we give it a badness rating of one minus .9-repeating.

(This seems to happen to a lot of arguments. "Take x, which is y." Well, no, x is not quite y, so the argument breaks down and the discussion follow... (read more)

Okay, here's the data: I choose SPECKS, and here is my background and reasons.

I am a cell biologist. That is perhaps not relevant.

My reasoning is that I do not think that there is much meaning in adding up individual instances of dust specks. Those of you who choose TORTURE seem to think that there is a net disutility that you obtain by multiplying epsilon by 3^^^3. This is obviously greater than the disutility of torturing one person.
I reject the premise that there is a meaningful sense in which these dust specks can "add up".

You can think in... (read more)

Mitchell, I acknowledge the defensibility of the position that there are tiers of incommensurable utilities. But to me it seems that the dust speck is a very, very small amount of badness, yet badness nonetheless. And that by the time it's multiplied to ~3^^^3 lifetimes of blinking, the badness should become incomprehensibly huge just like 3^^^3 is an incomprehensibly huge number.

One reason I have problems with assigning a hyperreal infinitesimal badness to the speck, is that it (a) doesn't seem like a good description of psychology (b) leads to total lo... (read more)

linkhyrule5 · 11y · −4 points
I'm not sure why surreal/hyperreal numbers result in, essentially, monofocus. Consider this scale on the surreals:

  • Omega^2: Utility of universal immortality; dis-utility of an existential risk. Omega utility for potentially omega people.
  • Omega: Utility of a human life.
  • 1: One traditional utilon.
  • Epsilon: Dust speck in your eye.

Let's say you're a perfectly rational human (*cough cough*). You naturally start on the Omega^2 scale, with a certain finite amount of resources. Clearly, an omega of human lives is worth more than your own, so you do not, repeat, do not promptly donate them all to MIRI. At least, not until you first calculate the approximate probability that your independent existence will make it more likely that someone somewhere will finally defeat death. Even if you have not the intelligence to do it yourself, or the social skills to keep someone else stable while they attack it, there's still the fact that you can give more to MIRI, over the long run, if you live on just enough to keep yourself psychologically and physiologically sound and then donate the rest to MIRI.

This is, essentially, the "sanity" term. Most of the calculation is done at this step, but because your life, across your lifespan, has some chance of solving death, you are not morally obligated to have yourself processed into Soylent Green.

This step interrupts for one of three reasons. One, you have reached a point where spending further resources, either on yourself or some existential-risk organization, does not predictably affect an existential risk. Two, all existential risks are dealt with, and death itself has died. (Yay!) Three, part of ensuring your own psychological soundness requires it - really, this just represents the fact that sometimes, a dollar (approx. one utilon) or a speck (epsilon utilons) can result in your death or significant misery, but nevertheless such concerns should still be resolved in order of decreasing utility.

At this point

"The notion of sacred values seems to lead to irrationality in a lot of cases, some of it gross irrationality like scope neglect over human lives and "Can't Say No" spending."

Could you post a scenario where most people would choose the option which unambiguously causes greater harm, without getting into these kinds of debates about what "harm" means? Eg., where option A ends with shooting one person, and option B ends with shooting ten people, but option B sounds better initially? We have a hard enough time getting rid of irrationality, even in cases where we know what is rational.

Eliezer: Why does anything have a utility at all? Let us suppose there are some things to which we attribute an intrinsic utility, negative or positive - those are our moral absolutes - and that there are others which only have a derivative utility, deriving from the intrinsic utility of some of their consequences. This is certainly one way to get incommensurables. If pain has intrinsic disutility and inconvenience does not, then no finite quantity of inconvenience can by itself trump the imperative of minimizing pain. But if the inconvenience might give rise to consequences with intrinsic disutility, that's different.

Dare I say that people may be overvaluing 50 years of a single human life? We know for a fact that some effect will be multiplied by 3^^^3 by our choice. We have no idea what strange and unexpected existential side effects this may have. It's worth avoiding the risk. If the question were posed with more detail, or specific limitations on the nature of the effects, we might be able to answer more confidently. But to risk not only human civilization, but ALL POSSIBLE CIVILIZATIONS, you must be DAMN SURE you are right. 3^^^3 makes even incredibly small doubts significant.

I wonder if my answers make me fail some kind of test of AI friendliness. What would the friendly AI do in this situation? Probably write poetry.

For Robin's statistics:
Given no other data but the choice, I would have to choose torture. If we don't know anything about the consequences of the blinking or how many times the choice is being made, we can't know that we are not causing huge amounts of harm. If the question deliberately eliminated these unknowns- ie the badness was limited to an eyeblink that does not immediately result in some disaster for someone or blindness for another, and you really are the one and only person making the choice ever, then I'd go with the dust-- But these qualific... (read more)

@Paul, I was trying to find a solution that didn't assume "b) all types of pleasures and pains are commensurable such that for all i, j, given a quantity of pleasure/pain experience i, you can find a quantity of pleasure/pain experience j that is equal to (or greater or less than) it. (i.e. that pleasures and pains exist on one dimension).", but rather established it for the case at hand. Unless it's specifically stated in the hypothetical that this is a true 1-shot choice (which we know it isn't in the real world, as we make analogous choices a... (read more)

Eliezer -- I think the issues we're getting into now require discussion that's too involved to handle in the comments. Thus, I've composed my own post on this question. Would you please be so kind as to approve it?

Recovering irrationalist: I think the hopefully-forthcoming-post-of-my-own will constitute one kind of answer to your comment. One other might be that one can, in fact, prefer huge dust harassment to a little torture. Yet a third might be that we can't aggregate the pain of dust harassment across people, so that there's some amount of single-person dust harassment that will be worse than some amount of torture, but if we spread that out, it's not.

For Robin's statistics:
Torture on the first problem, and torture again on the followup dilemma.

relevant expertise: I study probability theory, rationality and cognitive biases as a hobby. I don't claim any real expertise in any of these areas.

I think one of the reasons I finally chose specks is because, unlike what was implied, the suffering does not simply "add up": 3^^^3 people getting one dust speck in their eye is most certainly not equal to one person getting 3^^^3 dust specks in his eyes. It's not "3^^^3 units of disutility, total", it's one unit of disutility per person.

That still doesn't really answer the "one person for 50 years or two people for 49 years" question, though - by my reasoning, the second option would be preferable, while obviously the first optio... (read more)

It is my impression that human beings almost universally desire something like "justice" or "fairness." If everybody had the dust speck problem, it would hardly be perceived as a problem. If one person is being tortured, both the tortured person and others perceive unfairness, and society has a problem with this.

Actually, we all DO get dust motes in our eyes from time to time, and this is not a public policy issue.
In fact relatively small numbers of people ARE being tortured today, and this is a big problem both for the victims and for people who care about justice.

Beyond the distracting arithmetic lesson, this question reeks of Christianity, positing a situation in which one person's suffering can take away the suffering of others.

homunq · 12y · 0 points
This comment reeks of fuzzy reasoning.

For the moment I am disturbed by the fact that Eliezer and I seem to be in a minority here, but comforted a bit by the fact that we seem to know decision theory better than most. But I'm open to new data on the balance of opinion and the balance of relevant expertise.

It seems like selection bias might make this data much less useful. (It applied it my case, at least.) The people who chose TORTURE were likely among those with the most familiarity with Eliezer's writings, and so were able to predict that he would agree with them, and so felt less inclined to respond. Also, voicing their opinion would be publicly taking an unpopular position, which people instinctively shy away from.

Paul: Yet a third might be that we can't aggregate the pain of dust harassment across people, so that there's some amount of single-person dust harassment that will be worse than some amount of torture, but if we spread that out, it's not.

My induction argument covers that. As long as, all else equal, you believe:

  • A googolplex people tortured for time x is worse than one person tortured for time x+0.00001%.
  • A googolplex people dust specked x times during their lifetime without further ill effect is worse than one person dust specked for x*2 times during their lifetime without further ill effect.
  • ... (read more)

    A googolplex people being dust speckled every second of their life without further ill effect

    I don't think this is directly comparable, because the disutility of additional dust specking to one person in a short period of time probably grows faster than linearly - if I have to blink every second for an hour, I'll probably get extremely frustrated on top of the slight discomfort of the specks themselves. I would say that one person getting specked every second of their life is significantly worse than a couple billion people getting specked once.

    the disutility of additional dust specking to one person in a short period of time probably grows faster than linearly

    That's why I used a googolplex people to balance the growth. All else equal, do you disagree with: "A googolplex people dust specked x times during their lifetime without further ill effect is worse than one person dust specked for x*2 times during their lifetime without further ill effect" for the range concerned?

    one person getting specked every second of their life is significantly worse than a couple billion people getting specked once.

    I agree. I never said it wasn't.

    Have to run - will elaborate later.

    All else equal, do you disagree with: "A googolplex people dust specked x times during their lifetime without further ill effect is worse than one person dust specked for x*2 times during their lifetime without further ill effect" for the range concerned?

    I agree with that. My point is that agreeing that "A googolplex people being dust speckled every second of their life without further ill effect is worse than one person being horribly tortured for the shortest period experiencable" doesn't oblige me to agree that "A few billion* g... (read more)

    Just thought I'd comment that the more I think about the question, the more confusing it becomes. I'm inclined to think that if we consider the max utility state of every person having maximal fulfilment, and a "dust speck" as the minimal amount of "unfulfilment" from the top a person can experience, then two people experiencing a single "dust speck" is not quite as bad as a single person two "dust specks" below optimal. I think the reason I'm thinking that is that the second speck takes away more proportionally than ... (read more)

    I agree with that. My point is that agreeing that "A googolplex people being dust speckled every second of their life without further ill effect is worse than one person being horribly tortured for the shortest period experiencable" doesn't oblige me to agree that "A few billion* googolplexes of people being dust specked once without further ill effect is worse than one person being horribly tortured for the shortest period experiencable".

    Neither would I, you don't need to. :-)

    The only reason I can pull this off is because 3^^^3 is such... (read more)

    ok, without reading the above comments... (i did read a few of them, including robin hanson's first comment - don't know if he weighed in again).

    dust specks over torture.

    the apparatus of the eye handles dust specks all day long. i just blinked. it's quite possible there was a dust speck in there somewhere. i just don't see how that adds up to anything, even if a very large number is invoked. in fact with a very large number like the one described it is likely that human beings would evolve more efficient tear ducts, or faster blinking, or something like t... (read more)

    Recovering irrationalist: in your induction argument, my first stab would be to deny the last premise (transitivity of moral judgments). I'm not sure why moral judgments have to be transitive.

    Next, I'd deny the second-to-last premise (for one thing, I don't know what it means to be horribly tortured for the shortest period possible -- part of the tortureness of torture is that it lasts a while).

    Eliezer, both you and Robin are assuming the additivity of utility. This is not justifiable, because it is false for any computationally feasible rational agent.

    If you have a bounded amount of computation to make a decision, we can see that the number of distinctions a utility function can make is in turn bounded. Concretely, if you have N bits of memory, a utility function using that much memory can distinguish at most 2^N states. Obviously, this is not compatible with additivity of disutility, because by picking enough people you can identify more disti... (read more)
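One concrete way to see the bounded-representation point (our own illustration, with made-up disutility values, not part of the comment): a 64-bit float has only 53 bits of significand, so a sufficiently small per-speck disutility simply vanishes when added to a larger total.

```python
# Hypothetical disutility values, chosen only for illustration.
TORTURE_DISUTILITY = 1.0
SPECK_DISUTILITY = 1e-30   # far below float64's ~1e-16 resolution near 1.0

total = TORTURE_DISUTILITY
total += SPECK_DISUTILITY  # add one more speck's worth of harm...

# The bounded representation cannot distinguish the two states:
print(total == TORTURE_DISUTILITY)  # True -- the speck's harm is lost
```

A utility function stored in N bits can only make 2^N distinctions, so strict additivity across 3^^^3 people is unrepresentable; something has to give, exactly as the comment argues.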

    It's truly amazing the contortions many people have gone through rather than appear to endorse torture. I see many attempts to redefine the question, categorical answers that basically ignore the scalar, and what Eliezer called "motivated continuation".

    One type of dodge in particular caught my attention. Paul Gowder phrased it most clearly, so I'll use his text for reference:

    ...depends on the following three claims:

    a) you can unproblematically aggregate pleasure and pain across time, space, and individuality,

    "Unproblematically&quo... (read more)

Kenny · 11y · 2 points
    The "Rand-style selfishness" mars an otherwise sound comment.

    Recovering irrationalist: in your induction argument, my first stab would be to deny the last premise (transitivity of moral judgments). I'm not sure why moral judgments have to be transitive.

    I acknowledged it won't hold for every moral. There are some pretty barking ones out there. I say it holds for choosing the option that creates less suffering. For finite values, transitivity should work fine.

    Next, I'd deny the second-to-last premise (for one thing, I don't know what it means to be horribly tortured for the shortest period possible -- part of the tort... (read more)

    Recovering irrationalist, I hadn't thought of things in precisely that way - just "3^^4 is really damn big, never mind 3^^7625597484987" - but now that you point it out, the argument by googolplex gradations seems to me like a much stronger version of the arguments I would have put forth.

    It only requires 3^^5 = 3^(3^7625597484987) to get more googolplex factors than you can shake a stick at. But why not use a googol instead of a googolplex, so we can stick with 3^^4? If anything, the case is more persuasive with a googol because a googol is mor... (read more)
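As a rough arithmetic check (ours, not the commenter's): even 3^^4 = 3^7625597484987 already contains tens of billions of googol factors, before ever reaching 3^^5.

```python
# log10(3^^4) = 7625597484987 * log10(3); a googol is 10^100,
# so dividing the log by 100 counts the googol factors in 3^^4.
import math

log10_3hh4 = 7625597484987 * math.log10(3)
googol_factors = log10_3hh4 / 100

print(round(googol_factors / 1e10, 1))  # ~3.6, i.e. about 3.6e10 googol factors
```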

    Tom, your claim is false. Consider the disutility function

    D(Torture, Specks) = [10 * (Torture/(Torture + 1))] + (Specks/(Specks + 1))

    Now, with this function, disutility increases monotonically with the number of people with specks in their eyes, satisfying your "slight aggregation" requirement. However, it's also easy to see that going from 0 to 1 person tortured is worse than going from 0 to any number of people getting dust specks in their eyes, including 3^^^3.

    The basic objection to this kind of functional form is that it's not additive. Howe... (read more)
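For concreteness, the disutility function from the comment above can be evaluated exactly with rationals. This is a small sketch; 10^1000 stands in for 3^^^3, which has no explicit representation.

```python
# D is monotone in both arguments, but the specks term is bounded by 1,
# while a single torture already contributes 10 * (1/2) = 5.
from fractions import Fraction

def D(torture, specks):
    t, s = Fraction(torture), Fraction(specks)
    return 10 * (t / (t + 1)) + s / (s + 1)

huge = 10 ** 1000  # stand-in for an astronomically large speck count

print(D(1, 0))               # 5
print(D(0, huge) < 1)        # True: specks never sum past 1
print(D(1, 0) > D(0, huge))  # True: one torture outweighs any speck count
```

The design choice doing the work is the saturating term specks/(specks+1): disutility still increases with every added speck, but the increments shrink so fast that the total never crosses the fixed cost of one torture.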

    Again, not everyone agrees with the argument that unbounded utility functions give rise to Dutch books. Unbounded utilities only admit Dutch books if you do allow a discontinuity between infinite rewards and the limit of increasing finite awards, but you don't allow a discontinuity between infinite planning and the limit of increasing finite plans.

    Oh geez. Originally I had considered this question uninteresting so I ignored it, but considering the increasing devotion to it in later posts, I guess I should give my answer.

My justification, but not my answer, depends upon how the change is made.

    -If the offer is made to all of humanity before being implemented ("Do you want to be the 'lots of people get specks race' or the 'one guy gets severe torture' race?") I believe people could all agree to the specks by "buying out" whoever eventually gets the torture. For an immeasurabl... (read more)

    the argument by googolplex gradations seems to me like a much stronger version of the arguments I would have put forth.

    You just warmed my heart for the day :-)

    But why not use a googol instead of a googolplex

Shock and awe tactics. I wanted a featureless big number of featureless big numbers, to avoid wiggle-outs, and scream "your intuition ain't from these parts". In my head, FBNs always carry more weight than regular ones. Now you mention it, their gravity could get lightened by incomprehensibility, but we're already counting to 3^^^3.

Googol is better. Fewer readers will have to google it.

    @Neel.

    Then I only need to make the condition slightly stronger: "Any slight tendency to aggregation that doesn't beg the question." Ie, that doesn't place a mathematical upper limit on disutility(Specks) that is lower than disutility(Torture=1). I trust you can see how that would be simply begging the question. Your formulation:

    D(Torture, Specks) = [10 * (Torture/(Torture + 1))] + (Specks/(Specks + 1))

    ...doesn't meet this test.

    Contrary to what you think, it doesn't require unbounded utility. Limiting the lower bound of the range to (say) 2 * ... (read more)

    With so many so deep in reductionist thinking, I'm compelled to stir the pot by asking how one justifies the assumption that the SPECK is a net negative at all, aggregate or not, extended consequences or not? Wouldn't such a mild irritant, over such a vast and diverse population, act as an excellent stimulus for positive adaptations (non-genetic, of course) and likely positive extended consequences?

    A brilliant idea, Jef! I volunteer you to test it out. Start blowing dust around your house today.

    Hrm... Recovering's induction argument is starting to sway me toward TORTURE.

More to the point, that and some other comments are starting to sway me away from the thought that the disutility of single dust speck events per person becomes sublinear as the number of people experiencing them increases (with total population held constant).

I think if I made some errors, they were partly caused by "I really don't want to say TORTURE", and partly caused by my mistaking the exact nature of the nonlinearity. I maintain "one person experiencing two dust specks"... (read more)

    "A brilliant idea, Jef! I volunteer you to test it out. Start blowing dust around your house today."

    Although only one person, I've already begun, and have entered in my inventor's notebook some apparently novel thinking on not only dust, but mites, dog hair, smart eyedrops, and nanobot swarms!

Tom, if having an upper limit on disutility(Specks) that's lower than disutility(Torture=1) is begging the question in favour of SPECKS, then why isn't *not* having such an upper limit begging the question in favour of TORTURE?

    I find it rather surprising that so many people agree that utility functions may be drastically nonlinear but are apparently completely certain that they know quite a bit about how they behave in cases as exotic as this one.

Tom, if having an upper limit on disutility(Specks) that's lower than disutility(Torture=1) is begging the question in favour of SPECKS, then why isn't *not* having such an upper limit begging the question in favour of TORTURE?

    It should be obvious why. The constraint in the first one is neither argued for nor agreed on and by itself entails the conclusion being argued for. There's no such element in the second.

    I think we may be at cross purposes; my apologies if we are and it's my fault. Let me try to be clearer.

    Any particular utility function (if it's real-valued and total) "begs the question" in the sense that it either prefers SPECKS to TORTURE, or prefers TORTURE to SPECKS, or puts them exactly equal. I don't see how this can possibly be considered a defect, but if it is one then all utility functions have it, not just ones that prefer SPECKS to TORTURE.

    Saying "Clearly SPECKS is better than TORTURE, because here's my utility function and it sa... (read more)

    g: that's exactly what I'm saying. In fact, you can show something stronger than that.

    Suppose that we have an agent with rational preferences, and who is minimally ethical, in the sense that they always prefer fewer people with dust specks in their eyes, and fewer people being tortured. This seems to be something everyone agrees on.

    Now, because they have rational preferences, we know that a bounded utility function consistent with their preferences exists. Furthermore, the fact that they are minimally ethical implies that this function is monotone in the... (read more)

    I have argued in previous comments that the utility of a person should be discounted by his or her measure, which may be based on algorithmic complexity. If this "torture vs specks" dilemma is to have the same force under this assumption, we'd have to reword it a bit:

    Would you prefer that the measure of people horribly tortured for fifty years increases by x/3^^^3, or that the measure of people who get dust specks in their eyes increases by x?

    I argue that no one, not even a superintelligence, can actually face such a choice. Because x is at most ... (read more)

    A consistent utilitarian would choose the torture, but I don't think it's the moral choice.

    Let's bring this a little closer to home. Hypothetically, let's say you get to live your life again 3^^^3 times. Would you prefer to have an additional dust speck in your eye in each of your future lives, or else be tortured for 50 years in a single one of them?

    Any takers for the torture?

    Salivanth · 12y · 4 points
    Man that's a good one. It's certainly interesting to know that my ability to override intuition when it comes to large numbers is far less effective when the question is applied to me personally. I'm assuming that this question assumes no other ill effects from the specks. And I know I should pick the torture. I know that if the torture is the best outcome for other people, it's the best outcome for myself. But if I was given that choice in real life, I don't think I would as of writing this comment. I have some correcting to do.
    Salivanth · 12y · 0 points
    Actually, I ended up resolving this at some point. I would in fact pick the dust specks in this case, because the situations aren't identical. I'd spend a lot of time in my 3^^^3 lives worrying if I'm going to start being tortured for 50 years, but I wouldn't worry about the dust specks. Technically, the disutility of the dust specks is worse, but my brain can't comprehend the number "3^^^3", so it would worry more about the torture happening to me. Adding in the disutility of worrying about the torture, even a small amount, across 3^^^3 / 2 lives, and it's clear that I should pick the dust specks for myself in this situation, regardless of whether or not I choose torture in the original problem.
    AnotherIdiot · 12y · 1 point
    This is sort of avoiding the question. What if you made the choice, but then had your memory erased about the whole dilemma right afterwards? Assuming you knew before making your choice that your memory would be erased, of course.
    Salivanth · 12y · 1 point
    Then I choose the torture. I've grown a bit more comfortable with overriding intuition in regards to extremely large numbers since my original reply 3 months ago.

    I'll take it, as long as it's no more likely to be one of the earliest lives. I don't trust any universe that can make 3^^^3 of me not to be a simulation that would get pulled early.

    Hrm... Recovering's induction argument is starting to sway me toward TORTURE.

    Interesting. The idea of convincing others to decide TORTURE is bothering me much more than my own decision.

    I hope these ideas never get argued out of context!

    Cooking something for two hours at 350 degrees isn't equivalent to cooking something at 700 degrees for one hour.

    I'd rather accept one additional dust speck per lifetime in 3^^^3 lives than have one lifetime out of 3^^^3 lives involve fifty years of torture.

    Of course, that's me saying that, with my single life. If I actually had that many lives to live, I might become so bored that I'd opt for the torture merely for a change of pace.

    Recovering: chuckles no, I meant thinking about that, and rethinking about what the actual properties of what I'd consider to be a reasonable utility function led me to reject my earlier claim of the specific nonlinearity that led to my assumption that as you increase the number of people that receive a speck, the disutility is sublinear, and now I believe it to be linear. So huge bigbigbigbiggigantaenormous num specks would, of course, eventually have to have more disutility than the torture. But since to get to that point Knuth arrow notation had to be i... (read more)

    I'd take it.
    I find your choice/intuition completely baffling, and I would guess that far less than 1% of people would agree with you on this, for whatever that's worth (surely it's worth something.) I am a consequentialist and have studied consequentialist philosophy extensively (I would not call myself an expert), and you seem to be clinging to a very crude form of utilitarianism that has been abandoned by pretty much every utilitarian philosopher (not to mention those who reject utilitarianism!). In fact, your argument reads like a reductio ad absurdum ... (read more)

    No Mike, your intuition for really large numbers is non-baffling, probably typical, but clearly wrong, as judged by another non-Utilitarian consequentialist (this item is clear even to egoists).

    Personally I'd take the torture over the dust specks even if the number was just an ordinary incomprehensible number like say the number of biological humans who could live in artificial environments that could be built in one galaxy. (about 10^46th given a 100 year life span and a 300W (of terminal entropy dump into a 3K background from 300K, that's a large budge... (read more)

    So, if additive utility functions are naive, does that mean I can swap around your preferences at random like jerking around a puppet on a string, just by having a sealed box in the next galaxy over where I keep a googol individuals who are already being tortured for fifty years, or already getting dust specks in their eyes, or already being poked with a stick, etc., which your actions cannot possibly affect one way or the other?

    It seems I can arbitrarily vary your "non-additive" utilities, and hence your priorities, simply by messing with the nu... (read more)

    Michael Vassar:
    Well, in the prior comment, I was coming at it as an egoist, as the example demands.
    It's totally clear to me that a second of torture isn't a billion billion billion times worse than getting a dust speck in my eye, and that there are only about 1.5 billion seconds in a 50 year period. That leaves about a 10^10 : 1 preference for the torture.
    I reject the notion that each (time,utility) event can be calculated in the way you suggest. Successive speck-type experiences for an individual (or 1,000 successive dust specks for 1,000,000 individua... (read more)
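    The seconds arithmetic in the comment above is easy to verify (a quick sketch; the 10^27 figure is just "a billion billion billion" spelled out, and both disutility units are the commenter's, not mine):

    ```python
    # Quick check of the figures above: seconds in 50 years, and what a
    # "billion billion billion" (10^27) torture-to-speck ratio would imply.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600        # ~3.156e7 seconds
    seconds_in_50_years = 50 * SECONDS_PER_YEAR  # ~1.58e9: "about 1.5 billion"

    ratio_per_second = 1e27                      # billion * billion * billion
    # If each torture-second were worth 10^27 specks of disutility,
    # the whole 50 years would be "worth" this many specks:
    specks_equal_to_torture = ratio_per_second * seconds_in_50_years  # ~1.6e36

    print(f"{seconds_in_50_years:.3g} s in 50 years")
    print(f"{specks_equal_to_torture:.3g} specks, still utterly dwarfed by 3^^^3")
    ```

    Even under that generous per-second ratio, the crossover count (~10^36 specks) is unimaginably smaller than 3^^^3.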

    To continue this business of looking at the problem from different angles:

    Another formulation, complementary to Andrew Macdonald's, would be: Should 3^^^3 people each volunteer to experience a speck in the eye, in order to save one person from fifty years of torture?

    And with respect to utility functions: Another nonlinear way to aggregate individual disutilities x, y, z... is just to take the maximum, and to say that a situation is only as bad as the worst thing happening to any individual in that situation. This could be defended if one's assignment of ... (read more)

    I find it positively bizarre to see so much interest in the arithmetic here, as if knowing how many dust flecks go into a year of torture, just as one knows that sixteen ounces go into one pint, would inform the answer.

    What happens to the debate if we absolutely know the equation:

    3^^^3 dustflecks = 50 years of torture

    or

    3^^^3 dustflecks = 600 years of torture

    or

    3^^^3 dustfleck = 2 years of torture ?

    The nation of Nod has a population of 3^^^3. By amazing coincidence, every person in the nation of Nod has $3^^^3 in the bank. (With a money supply like that, those dollars are not worth much.) By yet another coincidence, the government needs to raise revenues of $3^^^3. (It is a very efficient government and doesn't need much money.) Should the money be raised by taking $1 from each person, or by simply taking the entire amount from one person?

    I take $1 from each person. It's not the same dilemma.

    ----

    Ri: The idea of convincing others to decide TORTURE is bothering me much more than my own decision.

    PK: I don't think there's any worry that I'm off to get my "rack winding certificate" :P

    Yes, I know. :-) I was just curious about the biases making me feel that way.

    individual living 3^^^3 times...keep memories and so on of all previous lives

    3^^^3 lives worth of memories? Even at one bit per life, that makes you far from human. Besides, you're likely to get tortured in googolplexes of those lif... (read more)

    Andrew Macdonald asked:
    Any takers for the torture?
    Assuming the torture-life is randomly chosen from the 3^^^3 sized pool, definitely torture. If I have a strong reason to expect the torture life to be found close to the beginning of the sequence, similar considerations as for the next answer apply.

    Recovering irrationalist asks:
    OK here goes... it's this life. Tonight, you start fifty years being loved at by countless sadistic Barney the Dinosaurs. Or, for all 3^^^3 lives you (at your present age) have to singalong to one of his songs. BARNEYLOVE or SONGS?
    ... (read more)

    Cooking something for two hours at 350 degrees isn't equivalent to cooking something at 700 degrees for one hour.

    Caledonian has made a great analogy for the point that is being made on either side. May I over-work it?

    They are not equivalent, but there is some length of time at 350 degrees that will burn as badly as 700 degrees. In 3^^^3 seconds, your lasagna will be ... okay, entropy will have consumed your lasagna by then, but it turns into a cloud of smoke at some point.

    Correct me if I am wrong here, but I don't think there is any length of time at 75 ... (read more)

    Zubon, we could formalize this with a tiered utility function (one not order-isomorphic to the reals, but containing several strata each order-isomorphic to the reals).

    But then there is a magic penny, a single sharp divide where no matter how many googols of pieces you break it into, it is better to torture 3^^^3 people for 9.99 seconds than to torture one person for 10.01 seconds. There is a price for departing the simple utility function, and reasons to prefer certain kinds of simplicity. I'll admit you can't slice it down further than the essentially ... (read more)
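    The tiered utility function mentioned above can be sketched with lexicographic comparison, which Python's tuples provide for free (an illustration of the structure only; the units are made up):

    ```python
    # Sketch of a tiered (lexicographic) disutility: a tuple compared left to
    # right, so any harm in the torture tier dominates any amount in the
    # speck tier. Only the ordering matters, not the invented units.
    def disutility(torture_seconds, specks):
        return (torture_seconds, specks)  # tuples compare lexicographically

    FIFTY_YEARS = 50 * 365.25 * 24 * 3600

    # No number of specks ever outweighs the torture under this ordering:
    assert disutility(FIFTY_YEARS, 0) > disutility(0, 3 ** 27)
    assert disutility(FIFTY_YEARS, 0) > disutility(0, 10 ** 1000)

    # The "magic penny" cost: a hundredth of a second of torture also
    # dominates every speck, however many googols of them there are.
    assert disutility(0.01, 0) > disutility(0, 10 ** 100)
    ```

    The last assertion is the sharp divide the comment objects to: the tiered ordering buys speck-immunity at the price of trading away arbitrarily many lower-tier harms for an arbitrarily small upper-tier one.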

    ...except that, if I'm right about the biases involved, the Speckists won't be horrified at each other.

    If you trade off thirty seconds of waterboarding for one person against twenty seconds of waterboarding for two people, you're not visibly treading on a "sacred" value against a "mundane" value. It will rouse no moral indignation.

    Indeed, if I'm right about the bias here, the Speckists will never be able to identify a discrete jump in utility across a single neuron firing, even though the transition from dust speck to torture can be br... (read more)

    1RST6y
    I think that we “speckists” see injuries as poisons: they can destroy people's lives only if they reach a certain concentration. So a greater but far more diluted pain can be less dangerous than a smaller but more concentrated one. 50 and 49 years of torture are still far over the threshold. One or two dust specks, on the other hand, are far below.
    1RST6y
    I think it's worse for 3^999999999 people to feel two dust specks than for 3^1000000000 people to feel one dust speck. After all, the next step is that it is worse for 3^1000000000 people to feel one dust speck than for 3^1000000001 people to feel less than one dust speck, which seems right.

    Assuming that there are 3^^^3 distinct individuals in existence, I think the answer is pretty obvious: pick the torture. However, since we cannot possibly hope to visualize so many individuals, it's a pointlessly large number. In fact, I would go as low as one quadrillion human beings with dust specks in their eyes outweighing one individual's 50 years of torture. Consider: one quadrillion seconds of minute but noticeable pain versus a scant fifty years of tortured hell. One quadrillion seconds is about 31,709,792 years. Let's just go with 32 mil... (read more)
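    The unit conversion in the comment above checks out (a quick sketch, using a 365-day year, as the quoted figure implies):

    ```python
    # One quadrillion seconds expressed in years (365-day years).
    SECONDS_PER_YEAR = 365 * 24 * 3600      # 31,536,000
    years = 1e15 / SECONDS_PER_YEAR
    print(f"{years:,.0f} years")            # 31,709,792 years, about 32 million
    ```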

    My initial reaction (before I started to think...) was to pick the dust specks, given that my biases made the suffering caused by the dust specks morally equivalent to zero, and 0^^^3 is still 0.

    However, given that the problem stated an actual physical phenomenon (dust specks), and not a hypothetical minimal annoyance, then you kind of have to take the other consequences of the sudden appearance of the dust specks under consideration, don't you?

    If I was omnipotent, and I could make everyone on Earth get a dust speck in their eye right now, how many car acc... (read more)

    3ArisKatsaris13y
    It is cheating to answer this by using worse individual consequences than the dust specks themselves. The very point of the question is the infinitesimality of each individual disutility.
    1Nornagest13y
    The more I think about the question, the more I'm convinced that it attempts to demonstrate the commensurability of disutility by invoking the commensurability of disutility.
    0TheOtherDave13y
    I don't see how it's attempting to demonstrate the commensurability of disutility at all; it seems to be using the assumed commensurability of disutility to challenge intuitions about disutility. Can you say more about what is convincing you?
    1Nornagest13y
    If the OP's challenging a moral intuition that doesn't at some point reduce to commensurability, then I don't know what it is. It asks us to imagine the worst thing that could happen to a random person, and then the least perceptibly bad thing that could happen, and seems to be making the argument that an unimaginably huge number of the latter would trump a single instance of the former. What's that a reductio for, if not the assumption that torture (or anything comparably bad) carries a special kind of disutility? On the other hand I'm not sure what the post was written in response to, if anything, so there might be some contextual information there that I'm missing.
    0TheOtherDave13y
    I'm... puzzled by this exchange. But, yes, agreed that a lot of objections to this post implicitly assert that torture is incommensurable with dust-specks, and EY is challenging that intuition.

    I have a question/answer in relation to this post that seems to be off-topic for the forum. Click on my name if interested.

    Since I would not be one of the people affected, I would not consider myself able to make that decision alone. In fact, my preferences are irrelevant in that situation even if I consider the situation to be obvious.

    To have a situation with 3^^^3 people, we must have at least that many people capable of existing in some meaningful way. I assume we cannot query them about their preferences in any meaningful (omniscient) way. As I cannot choose who will be tortured or who gets dust specks, I have to make a collective decision.

    I think that my solution would be to take t... (read more)

    Evolution seems to have favoured the capacity for empathy (the specks choice) over the capacity for utility calculation, even though utility calculation would have been a 'no brainer' for the brain capacity we have.
    The whole concept reminds me of the Turing test. Turing, as a mathematician, just seems to have completely failed to understand that we don't assign rationality, or sentience, to another object by deduction. We do it by analogy.

    I know that this is only a hypothetical example, but I must admit that I'm fairly shocked at the number of people indicating that they would select the torture option (as long as it wasn't them being tortured). We should be wary of the temptation to support something unorthodox for the effect of: "Hey, look at what a hardcore rationalist I can be." Real decisions have real effects on real people.

    And we should be wary to select something orthodox for fear of provoking shock and outrage. Do you have any reason to believe that the people who say they prefer TORTURE to SPECKS are motivated by the desire to prove their rationalist credentials, or that they don't appreciate that their decisions have real consequences?

    Jeffrey, on one of the other threads, I volunteered to be the one tortured to save the others from the specks.

    As for "Real decisions have real effects on real people," that's absolutely correct, and that's the reason to prefer the torture. The utility function implied by preferring the specks would also prefer lowering all the speed limits in the world in order to save lives, and ultimately would ban the use of cars. It would promote raising taxes by a small amount in order to reduce the amount of violent crime (including crimes involving torture... (read more)

    Following your heart and not your head - refusing to multiply - has also wrought plenty of havoc on the world, historically speaking. It's a questionable assertion (to say the least) that condoning irrationality has less damaging side effects than condoning torture.

    8rkyeun12y
    I think you've constructed your utility wrong in this instance. Without losing track of scope, we have 3^^^3 motes of dust in 3^^^3 eyes. And yes, that outweighs 50 years of torture, if and only if people have zero tolerance. But people don't break down into sobbing messes at the (literally at least) slightest provocation. There is a small threshold of badness that can happen to someone without them caring, and as long as all 3^^^3 of them only get epsilon below that, the total suffering for all 3^^^3 of them summed is exactly 0. We have 3^^^3 people, and 3^^^3 motes of dust, but also 3^^^3 separate emotional shock absorbers that take that speck of dust without flinching. It is non-linear. If you keep adding dust, eventually it starts breaking people's shock absorbers. And once those 3^^^3 people start experiencing nonzero suffering, it would quickly add up to more than fifty man-years of torture. Then the equation stops favoring dust motes. And here I hope I have some other recourse, because "If you ever find yourself thinking that torture is the right thing to do," is one of my warnings. I hope I can come out clever enough to take a third option where nobody gets tortured.
    -1polymathwannabe10y
    I wish I could upvote this 3^^^3 times.
    4gjm10y
    But Eliezer's original description said this: It's an essential part of the setup that the disutility of a "dust speck" is not zero.
    0rkyeun10y
    Let me change "noticing" to "caring" then. Thank you for the correction.
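    rkyeun's "shock absorber" argument amounts to a threshold model of disutility: zero below a per-person tolerance, positive above it. A minimal sketch with invented numbers (note gjm's reply above: the original problem stipulates that a speck's disutility is not zero, which is exactly what this model denies):

    ```python
    # Threshold ("shock absorber") model: harm below a per-person tolerance
    # registers as zero felt suffering, so it never sums to anything.
    TOLERANCE = 1.0      # hypothetical per-person tolerance
    SPECK = 0.001        # hypothetical harm of one dust speck, below tolerance

    def felt_suffering(harm):
        return 0.0 if harm <= TOLERANCE else harm - TOLERANCE

    population = 3 ** 27                        # stand-in for 3^^^3
    total = population * felt_suffering(SPECK)  # exactly 0.0 under this model
    assert total == 0.0

    # Pile enough specks onto one person and the absorber breaks, which is
    # where the comment says the equation stops favoring dust motes:
    assert felt_suffering(2000 * SPECK) > 0
    ```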

    "Following your heart and not your head - refusing to multiply - has also wrought plenty of havoc on the world, historically speaking. It's a questionable assertion (to say the least) that condoning irrationality has less damaging side effects than condoning torture."

    I'm not really convinced that multiplication of the dust-speck effect is relevant. Subjective experience is restricted to individuals, not collectives. To me, this specific exercise reduces to a simpler question: Would it be better (more ethical) to torture individual A for 50 years,... (read more)

    Jeffrey wrote: To me, this specific exercise reduces to a simpler question: Would it be better (more ethical) to torture individual A for 50 years, or inflict a dust speck on individual B? Gosh. The only justification I can see for that equivalence would be some general belief that badness is simply independent of numbers. Suppose the question were: Which is better, for one person to be tortured for 50 years or for everyone on earth to be tortured for 49 years? Would you really choose the latter? Would you not, in fact, jump at the chance to be the single ... (read more)

    Jeffrey, do you really think serial killing is no worse than murdering a single individual, since "Subjective experience is restricted to individuals"?

    In fact, if you kill someone fast enough, he may not subjectively experience it at all. In that case, is it no worse than a dust speck?

    "Suppose the question were: Which is better, for one person to be tortured for 50 years or for everyone on earth to be tortured for 49 years? Would you really choose the latter? Would you not, in fact, jump at the chance to be the single person for 50 years if that were the only way to get that outcome rather than the other one?"

    My criticism was for this specific initial example, which yes did seem "obvious" to me. Very few, if any, ethical opinions can be generalized over any situation and still seem reasonable. At least by my definiti... (read more)

    I can see myself spending too much time here, so I'm going to finish-up and ya'll can have the last word. I'll admit that it's possible that one or more of you actually would sacrifice yourself to save others from a dust speck. Needless to say, I think it would be a huge mistake on your part. I definitely wouldn't want you to do it on my behalf, if for nothing more than selfish reasons: I don't want it weighing on my conscience. Hopefully this is a moot point anyway, since it should be possible to avoid both unwanted dust specks and unwanted torture (eg. via a Friendly AI). We should hope that torture dies-away with the other tragedies of our past, and isn't perpetuated into our not-yet-tarnished future.

    I know you're all getting a bit bored, but I'm curious what you think about a different scenario:

    What if you have to choose between (a) for the next 3^^^3 days, you get one extra speck in your eye per day beyond the normal amount, and for 50 years you're placed in stasis, or (b) you get the normal amount of specks in your eyes, but during the next 3^^^3 days you'll pass through 50 years of atrocious torture.

    Everything else is considered equal in the other cases, including the fact that (i) your total lifespan will be the same in both cases (more than 3^^^3 days), (ii) th... (read more)

    OK, I see I got a bit long-winded. The interesting part of my question is if you'd take the same decision if it's about you instead of others. The answer is obvious, of course ;-)

    The other details/versions I mentioned are only intended to explore the "contour of the value space" of the other posters. (: I'm sure Eliezer has a term for this, but I forget it.)

    Bogdan's presented almost exactly the argument that I too came up with while reading this thread. I would choose the specks in that argument and also in the original scenario (as long as I am not committing to the same choice being repeated an arbitrary number of times, and I am not causing more people to crash their cars than I cause not to crash their cars; the latter seems like an unlikely assumption, but thought experiments are allowed to make unlikely assumptions, and I'm interested in the moral question posed when we accept the assumption). Based on ... (read more)

    I came across this post only today, because of the current comment in the "recent comments" column. Clearly, it was an exercise that drew an unusual amount of response. It further reinforces my impression of much of the OB blog, posted in August, and denied by email.

    I think you should ask everyone until you have at least 3^^^3 people whether they would consent to having a dust speck fly into their eye to save someone from torture. When you have enough people, just put dust specks into their eyes and save the others.

    The question is, of course, silly. It is perfectly rational to decline to answer. I choose to try to answer.

    It is also perfectly rational to say "it depends". If you really think "a dust speck in 3^^^3 eyes" gives a uniquely defined probability distribution over different subsets of possibilityverse, you are being ridiculous. But let's pretend it did - let's pretend we had 3^^^^3 parallel Eliezers, standing on flat golden surfaces in 1G and one atmosphere, for just long enough to ask each other enough questions to define the prob... (read more)

    Tim: You're right - if you are a reasonably attractive and charismatic person. Otherwise, the question (from both sides) is worse than the dust speck.

    (Asking people also puts you in the picture. You must like to spend eternity asking people a silly question, and learning all possible linguistic vocalizations in order to do so. There are many fewer vocalizations than possible languages, and many fewer possible human languages than 3^^^3. You will be spending more time going from one person of the SAME language to another, at 1 femtosecond per journey, than ... (read more)

    Torture is not the obvious answer, because torture-based suffering and dust-speck-based suffering are not scalar quantities with the same units.

    To be able to make a comparison between two quantities, the units must be the same. That's why we can say that 3 people suffering torture for 49.99 years is worse than 1 person suffering torture for 50 years. Intensity × Duration × Number of People gives us units of PainIntensity-Person-Years, or something like that.

    Yet torture-based suffering and dust-speck-based suffering are not measured in the same units. Consequ... (read more)

    5Cyan15y
    You seem to have gotten hung up on 3^^^3, which is really just a placeholder for "some finite number so large it boggles the mind". If you accept that all types of pain can be measured on a common disutility scale, then all you need is a non-zero conversion factor, and the repugnant conclusion follows (for some mind-bogglingly large number of specks). I think that if a line of argument that rescues your rebuttal exists, it involves lexicographic preferences.
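    Cyan's point needs only exact arithmetic to make concrete: pick any non-zero conversion factor and some finite number of specks suffices (the two constants below are toy numbers, purely illustrative):

    ```python
    from fractions import Fraction

    # Any non-zero speck-to-torture conversion factor yields a finite
    # crossover. Both constants are invented; only their being finite and
    # non-zero matters to the argument.
    TORTURE = Fraction(10 ** 12)     # disutility of 50 years of torture
    SPECK = Fraction(1, 10 ** 30)    # disutility of one speck: tiny, non-zero

    specks_needed = TORTURE / SPECK  # exact: 10^42 specks
    assert specks_needed == 10 ** 42

    # 3^^^3 exceeds any such crossover, so an additive utility with a
    # non-zero factor must rate 3^^^3 specks as worse than the torture.
    assert specks_needed * SPECK >= TORTURE
    ```

    `Fraction` keeps the comparison exact; with floats, a crossover this extreme could be blurred by rounding.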

    There is a false choice being offered, because every person in every lifetime is going to experience getting something in their eye, I get a bug flying into my eye on a regular basis whenever I go running (3 of them the last time!) and it'll probably have happened thousands of times to me at the end of my life. It's pretty much a certainty of human experience (Although I suppose it's statistically possible for some people to go through life without ever getting anything in their eyes).

    Is the choice being offered to make all humanity's eyes immune for all eternity to small inconveniences such as bugs, dust or eyelashes? Otherwise we really aren't being offered anything at all.

    0Bugle15y
    Although if we factor in consequences, say... being distracted by a dust speck in the eye while driving or doing any other such critical activity then statistically those trillions of dust specks have the potential to cause untold amounts of damage and suffering

    Doesn't "harm", to a consequentialist, consist of every circumstance in which things could be better, but aren't? If a speck in the eye counts, then why not, for example, being insufficiently entertained?

    If you accept consequentialism, isn't it morally right to torture someone to death so long as enough people find it funny?

    5Alicorn15y
    I'm picking on this comment because it prompted this thought, but really, this is a pervasive problem: consequentialism is a gigantic family of theories, not just one. They are all still wrong, but for any single counterexample, such as "it's okay to torture people if lots of people would be thereby amused", there is generally at least one theory or subfamily of theories that have that counterexample covered.
    1PowerSet15y
    Isn't it paradoxical to argue against consequentialism based on its consequences? The reason you can't torture people is that those members of your population who aren't as dumb as bricks will realize that the same could happen to them. Such anxiety among the more intelligent members of your society should outweigh the fun experienced by the more easily amused.
    5Alicorn15y
    I typically argue against consequentialism based on appeals to intuition and its implications, which are only "consequences" in the sense used by consequentialism if you do some fancy equivocating. Pfft. It is trivially easy to come up with thought experiments where this isn't the case. You can increase the ratio of bricks-to-brights until doing the arithmetic leads to the result that you should go ahead and torture folks. You can choose folks to torture on the basis of well-publicized, uncommon criteria, so that the vast majority of people rightly expect it won't happen to them or anyone they care about. You can outright lie to the population, and say that the people you torture are all volunteers (possibly even masochists who are secretly enjoying themselves) contributing to the entertainment of society for altruistic reasons. Heck, after you've tortured them for a while, you can probably get them to deliver speeches about how thrilled they are to be making this sacrifice for the common morale, on the promise that you'll kill them quicker if they make it convincing. All that having been said, there are consequentialist theories that do not oblige or permit the torture of some people to amuse the others. Among them are things like side-constraints rights-based consequentialisms, certain judicious applications of deferred-hedon/dolor consequentialisms, and negative utilitarianism (depending on how the entertainment of the larger population cashes out in the math).

    It seems that many, including Yudkowsky, answer this question by making the most basic mistake, i.e. by cheating - assuming facts not in evidence.

    We don't know anything about (1) the side-effects of picking SPECKS (such as car crashes); and definitely don't know that (2) the torture victim can "acclimate". (2) in particular seems like cheating in a big way - especially given the statement "without hope or rest".

    There's nothing rational about posing a hypothetical and then adding in additional facts in your answer. However, that's a great way to avoid the question presented.

    -1R_Nebblesworth15y
    I've received minus 2 points (that's bad I guess?) with no replies, which is very illuminating... I suppose I'm just repeating the above points on lexicographic preferences. Any answer to the question involves making value choices about the relative harms associated with torture and specks; I can't see how there's an "obvious" answer at all, unless one is arrogant enough to assume their value choices are universal and beyond challenge. Unless you add facts and assumptions not stated, the question compares torture x 50 years to 1 dust speck in an infinite number of people's eyes, one time. Am I missing something? Because it seems it can't be answered without reference to value choices - which to anyone who doesn't share those values will naturally appear irrational.
    7CarlShulman15y
    "I've received minus 2 points (that's bad I guess?) with no replies, which is very illuminating... " I think this is mainly because your comment seemed uninformed by the relevant background but was presented with a condescending and negative tone. Comments with both these characteristics tend to get downvoted, but if you cut back on one or the other you should get better responses. "It seems that many, including Yudkowsky, answer this question by making the most basic mistake, i.e. by cheating - assuming facts not in evidence." http://lesswrong.com/lw/2k/the_least_convenient_possible_world/ "Any answer to the question involves making value choices" Yes it does. "compares torture x 50 years to 1 dust speck in an infinite number people's eyes" 3^^^3 is a (very large) finite number. "It can't be answered without reference to value choices - which to anyone who doesn't share those values will naturally appear irrational." Moral anti-realists don't have to view differences in values as reflecting irrationality.
    0R_Nebblesworth15y
    Fair enough, apologies for the tone. But if answering the question involves making arbitrary value choices I don't understand how there can possibly be an obvious answer.
    2CarlShulman15y
    There isn't for agents in general, but most humans will in fact trade off probabilities of big bads (death, torture, etc) against minor harms, and so preferring SPECKS indicates a seeming incoherency of values.
    1R_Nebblesworth15y
    Thanks for the patient explanation.
    0thomblake15y
    I'd just like to note that comments informed by the relevant background but condescending and negative are often voted down as well. Though Annoyance seems to have relatively high karma anyway.
    0CarlShulman15y
    I considered that, which is why I said that the responses would be "better."
    0bogus15y
    I agree. See DS3618 for a crystal-clear example.
    1thomblake15y
    I strongly doubt that person counts as "informed by the relevant background".
    1CarlShulman15y
    I don't think that case is crystal-clear, could you explain this a bit more? Looking at DS3618's comments, he (I estimate gender based on writing style and the demographics of this forum and of the CMU PhD program he claims to have entered) had some good (although obvious) points regarding peer-review and Flare. Those comments were upvoted. The comments that were downvoted seem to have been very negative and low in informed content. He claimed that calling intelligent design creationism "creationism" was "wrong" because ID is logically separable from young earth creationism and incorporates the idea of 'irreducible complexity.' However, arguments from design, including forms of 'irreducible complexity' argument, have been creationist standbys for centuries. Rudely chewing someone out for not defining creationism in a particular narrow fashion, the fashion advanced by the Discovery Institute as part of an organized campaign to evade court rulings, does deserve downvoting. Suggesting that the Discovery Institute, including Behe, isn't a Christian front group is also pretty indefensible given the public info on it (e.g. the "wedge strategy" and numerous similar statements by DI members to Christian audiences that they are a two-faced organization). This comment implicitly demanded that no one note limitations of the brain without first building AGI, and was lacking in content. DS3618 also claims to have a stratospheric IQ, but makes numerous spelling and grammatical errors. Perhaps he is not a native English speaker, but this does shift probability mass to the hypothesis that he is a troll or sock puppet. He says that he entered the CMU PhD program without a bachelor's degree based on industry experience. This is possible, as CMU's PhD program has no formal admissions requirements according to its document. However, given base rates, and the context of the claim, it is suspiciously convenient and shifts further probability mass towards the troll hypothesis. I sup

    The obvious answer is that torture is preferable.

    If you had to pick for yourself between a 1/3^^^3 chance of 50 years of torture and the dust speck, you would pick the chance of torture.

    We actually do this every day: we eat foods that can poison us rather than be hungry, we cross the road rather than stay at home, etc.

    Imagine there is a safety improvement to your car that costs 0.0001 cents but will save you from an event that happens once in 1000 universe lifetimes. Would you pay for it?

    8thomblake14y
    I don't think it's very controversial that TORTURE is the right choice if you're maximizing overall net utility (or in your example, maximizing expected utility). But some of us would still choose SPECKS.

    Very-Related Question: Typical homeopathic dilutions are 10^(-60). On average, this would require giving two billion doses per second to six billion people for 4 billion years to deliver a single molecule of the original material to any patient.

    Could one argue that if we administer a homeopathic pill of vitamin C in the above dilution to every living person for the next 3^^^3 generations, the impact would be a humongous amount of flu-elimination?

    If anyone convinces me of that, I might accept being a Torturer. Otherwise, I assume that the negligibility of the speck, plus people's resilience, would make for no lasting effects. The disutility would vanish in milliseconds. If they wouldn't even notice or remember the specks after a while, it would equate to zero disutility.

    It's not that I can't do the maths. It's that the evil of the speck seems too diluted to do harm.

    Just like homeopathy is too diluted to do good.

    3RobinZ14y
    Easily. 3^^^3 = 3^^7,625,597,484,987, a power tower of 3s over seven trillion layers tall, is so much larger than 10^60 that it is almost certain that many people will receive significant doses of vitamin C. Heck, 3^^4 = 3^7,625,597,484,987 already has roughly 3.6 trillion digits, utterly dwarfing 10^60, and that's merely a four-layer tower. If there is any causal relationship at all between receiving a dose of vitamin C and flu resistance (which I believe you imply for the purposes of the question), then a tremendous number of people will be protected from the flu -- a number that itself dwarfs 10^60.
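The scale comparison in this exchange can be sanity-checked with logarithms rather than by computing the towers themselves; a minimal sketch (the function name and approach are mine, not from the thread):

```python
import math

def tower_digits(height):
    """Digit count of a power tower of 3s (3, 3^3, 3^3^3, ...).

    Exact up to height 3; for height 4 we use
    digits(3**x) = floor(x * log10(3)) + 1. At height 5 and beyond
    even the digit count is astronomically large.
    """
    if height <= 3:
        value = 3
        for _ in range(height - 1):
            value = 3 ** value
        return len(str(value))
    if height == 4:
        top = 7_625_597_484_987          # 3^^3, the exponent of the top level
        return int(top * math.log10(3)) + 1
    raise OverflowError("digit count itself is astronomically large")

# 3^^3 has 13 digits; 3^^4 = 3^7,625,597,484,987 has ~3.6 trillion
# digits, while 10^60 has only 61 -- and 3^^^3 towers far beyond that.
```

Even the four-layer tower already has trillions of digits, so any argument that needs "much larger than 10^60" clears the bar by an absurd margin.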
    1ABranco14y
    Not what I said. Each person will receive vitamin C diluted in the ratio of 10^(-60) (see reference here). The amount is the same for everyone, constant. Strictly one dose per person (as it was one speck per person). But the recipients are all people alive in the next 3^^^3 generations. ...which wouldn't mean the effect is linear at all. Above a certain dose it can be lethal; below it, it can have no effect. Does it sound reasonable that if you eat one nanogram of bread during severe starvation, it would retard your death by precisely zero seconds?
    2RobinZ14y
    But each patient receives less than 10^60 molecules -- one must assume some probability distribution on the number of molecules if we are to suppose any medication is delivered at all. Assuming the dilutions are performed as prescribed in a typical homeopathic preparation, a minuscule fraction will randomly have significantly more than the expected concentration, but even so at least the logarithm of the fraction will be on an order of magnitude with the logarithm of 10^-60 -- and therefore will still multiply to a tremendous number in 3^^^3 cases. That said, even if you assume that the distribution is exactly as even as possible -- every patient receives either zero or one molecule of vitamin C -- there will be a minuscule probability that the effect of that one molecule will be at the tipping point. Truly minuscule -- probably on the order of 10^-20 to 10^-25, a few in one Avogadro's number -- but this still corresponds to aiding 1 in 10^80 to 10^85 people, which multiplies to a tremendous number in 3^^^3 cases.
    0ABranco14y
    Mathematically, I have to agree with your reply: you either have no molecules or at least one. And then, your calculations hold true. And I'm wrong. Physiologically, though, my argument is that the "nanoutility" that this molecule would add would have such a negligible effect that nothing would change in the person's life measured by any practical purposes. It will pass completely unnoticed (zero!) — for each person in the 3^^^3 generations. I assume a fuzzy scale of flu, so that no single molecule would turn sure-flu to sure-non-flu. As I assumed with the specks.
    1RobinZ14y
    Even if you perform the more sophisticated analysis, the probability of the flu should shift slightly -- and that slightly will be on the order of 10^-23, as before. And that times 3^^^3...
    4pengvado14y
    No. You use energy at some finite rate (I'll assume 2000 kilocalories/day, dunno how much starvation affects this). A nanogram of bread contains a nonzero amount of energy (~2.5 microcalories). So it increases your life expectancy by a nonzero time (~100 nanoseconds). A similar analysis can be performed for anything down to and including a single molecule.
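The figures in this comment check out; a quick arithmetic sketch (the energy density is an assumed typical value, as in the comment):

```python
# All inputs are the comment's assumptions, not measurements.
KCAL_PER_DAY = 2000                       # assumed metabolic rate
SECONDS_PER_DAY = 24 * 60 * 60
KCAL_PER_GRAM_OF_BREAD = 2.5              # typical energy density of bread

nanogram_kcal = KCAL_PER_GRAM_OF_BREAD * 1e-9    # ~2.5 microcalories
burn_rate = KCAL_PER_DAY / SECONDS_PER_DAY       # kcal burned per second
extra_life = nanogram_kcal / burn_rate           # seconds of life expectancy

# extra_life comes out around 1.1e-7 s -- on the order of
# 100 nanoseconds, matching the comment.
```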
    3Nick_Tarleton14y
    That's not really the point. The "dust speck" just means the mildest possible harm that a person can suffer; if you don't think a dust speck with no long-term consequences can be harmful, you should mentally substitute a stubbed toe (with no long-term consequences) or the like.

    I doubt anybody's going to read a comment this far down, but what the heck.

    Perhaps going from nothing to a million dust specks isn't a million times as bad as going from nothing to one dust speck. One thing is certain though: going from nothing to a million dust specks is exactly as bad as going from nothing to one dust speck plus going from one dust speck to two dust specks etc.

    If going from nothing to one dust speck isn't a millionth as bad as nothing to a million dust specks, it has to be made up somewhere else, like going from 999,999 to a million dust... (read more)
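The telescoping point this comment makes holds for any disutility curve: the marginal increments always sum exactly to the total, so if early specks are cheap, later ones must make up the difference. A minimal sketch with an arbitrary concave curve (the square root is a stand-in, not anyone's proposed utility function):

```python
import math

def total_badness(n_specks):
    """Arbitrary concave disutility: total badness grows sublinearly."""
    return math.sqrt(n_specks)

N = 10**6
increments = [total_badness(k + 1) - total_badness(k) for k in range(N)]

# The marginal steps telescope to the total, whatever the curve's shape:
assert abs(sum(increments) - total_badness(N)) < 1e-6
# Early steps are large, later ones tiny -- but none of them vanish:
assert increments[0] > increments[-1] > 0
```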

    Ask yourself this to make the question easier. What would you prefer: getting 3^^^3 dust specks in your eye, or being hit with a spiked whip for 50 years?

    You must live long enough to feel the 3^^^3 specks in your eye, and each one lasts a fraction of a second. You can feel nothing else but that speck in your eye.

    So, it boils down to this question. Would you rather be whipped for 50 years, or get specks in your eye for over a googolplex of years?

    If I could possibly put a marker on the disutility of a speck of dust in the eye, and compare that to... (read more)

    0Alicorn14y
    I asked this here.
    [-][anonymous]14y00

    In the real world the possibility of torture obviously hurts more people than just the person being tortured. By theorizing about the utility of torture you are actually subjecting possibly billions of people to periodic bouts of fear and pain.

    Forgive me if this has been covered before. The internet here is flaking out and it makes it hard to search for answers.

    What is the correct answer to the following scenario: is it preferable to have one person be tortured if it gives 3^^^3 people a minuscule amount of pleasure?

    The source of this question was me pondering the claim, "Pain is temporary; a good story lasts forever."

    0Blueberry14y
    Great question, and if it has been covered before on this site, I haven't seen it. Philosophers have discussed whether or not "sadistic" pleasure from others' suffering should be included in utilitarian calculations, and in fact this is one of the classic arguments against (some types of) utilitarianism, along with the utility monster and the organ lottery. One possible answer is that utilitarians should maximize other terminal values besides just pleasure, and that sadistic pleasures like this go against the total of our terminal values, so utilitarians shouldn't allow these to cancel out torture.
    2wedrifid14y
    Yes.
    [-][anonymous]14y10

    So, I'm very late into this game, and not through all the sequences (where the answer might already be given), but still, I am very interested in your positions (probably nobody answers, but who knows):

    1. Is there a natural number N for which you'd kill one person vs. giving N people a single dust speck each? (I assume this depends on whether one expects an everlasting universe.)
    2. Do you "integrate" utility over time (or "experience-moments", as per timeless bla), or is it better to just maximize the "final" point, however one got
    ... (read more)
    0AlephNeil14y
    I don't think this question (or the one discussed in the OP) admits meaningful answers. It seems a pity to just 'pour cold water over them', but I don't know what else to say: whatever 'moral truths' there are in the world simply don't reach as far as such absurd scenarios.

    Depends what game you're playing, surely. If you're playing 'Invest For Retirement' and the utility function measures the size of your retirement fund, then naturally the 'final' point is what matters. On the other hand, if you're playing 'Enjoy Your Retirement' and the utility function measures how much money you have to spend on a monthly basis, then what's important is the "integrated" utility.

    Two points of interest here: (1) Final utility in 'Invest for retirement' equals integrated utility in 'Enjoy your retirement' (modulo some faffing around with discount rates). (2) The game of 'Enjoy your retirement' is notable insofar as it's a game with a guaranteed final utility of zero (or -infinity if you prefer).

    I'd gladly get a speck of dust in my eye as many times as I can, and I'm sure those 3^^^3 people would join me, to keep one guy from being tortured for 50 years.

    7Vladimir_Nesov14y
    Maybe you will indeed, but should you?
    0RobinZ14y
    Suppose some fraction of the 3^^^3 dropped out. How many dust specks would you be willing to take? Two? Ten? A thousand? A million? A billion? That's half a millimeter in diameter, now, and we're only at 10^9. How about 10^12? 10^15? 10^18? We're around half a meter in diameter now, approaching or exceeding the size of a football, and we've not even reached 3^^4 - and remember that 3^^^3 is 3^^3^^3 = 3^^7,625,597,484,987.

    What, you think that all of the 3^^^3 will go for it? All of them, chipping in to save one person who was getting 50 years of torture? In a universe with 3^^^3 people in it, how many people do you think are being tortured? Our planet has had around 10^11 human beings in history. If we say that only one of those 10^11 people were ever tortured for 50 years in history - or even that there were only a one-in-a-thousand chance of it, a rate of one in 10^14 - how many people would be tortured for 50 years among the more than 3^^^3 we are positing? And do you think that all 3^^^3 will choose the same one you did?

    Would you consider that, perhaps, one dust speck is a bit much to pay to save one part in 3^^^3 of a victim?
    3Vladimir_Nesov14y
    When multiple agents coordinate, their decision delivers the whole outcome, not a part of it. Depending on what you decide, everyone who reasons similarly will decide. Thus, you have the absolute control over what outcome to bring about, even if you are only one of a gazillion like-minded voters. Here, you decide whether to save one person, at the cost of harming 3^^^3 people. This is not equivalent to saving 1/3^^^3 of a person at the cost of harming one person, because the saving of 1/3^^^3 of a person is not something that actually could happen, it is at best utilitarian simplification, which you must make explicit and not confuse for a decision-theoretic construction.
    0RobinZ14y
    If it were a one-shot deal with no cheaper alternative, I could see agreeing. But that still leaves the other 3^^^3/10^14 victims and this won't scale to deal with those.
    0Nick_Tarleton14y
    This seems to work nearly as well for any harm less than being tortured for 50 years — say, being tortured for 25 years.
    4cousin_it14y
    I wouldn't volunteer for 25 years of torture to save a random person from 50. A relative, maybe.

    "Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?

    I think the answer is obvious. How about you?"

    Yes, Eliezer, the answer is obvious. The answer is that this is a false dilemma, and that I should go searching for the third alternative, with neither 3^^^3 dust specks nor 50 years of torture. These are not optimal alternatives.

    Construct a thought experiment in which every single one of those 3^^^3 is asked whether he would accept a dust speck in the eye to save someone from being tortured, take the answers as a vote. If the majority would deem it personally acceptable, then acceptable it is.

    3benelliott13y
    This doesn't work at all. If you ask each of them to make that decision you are asking to compare their one dust speck, with somebody else's one instance of torture. Comparing 1 dust speck with torture 3^^^3 times is not even remotely the same as comparing 3^^^3 dust specks with torture. If you ask me whether 1 is greater than 3 I will say no. If you ask me 5 times I will say no every time. But if you ask me whether 5 is greater than 3 I will say yes. The only way to make it fair would be to ask them to compare themselves and the other 3^^^3 - 1 getting dust specks with torture, but I don't see why asking them should get you a better answer than asking anyone else.
    0topynate13y
    Compare two scenarios: in the first, the vote is on whether every one of the 3^^^3 people are dust-specked or not. In the second, only those who vote in favour are dust-specked, and then only if there's a majority. But these are kind of the same scenario: what's at stake in the second scenario is at least half of 3^^^3 dust-specks, which is about the same as 3^^^3 dust-specks. So the question "would you vote in favour of 3^^^3 people, including yourself, being dust-specked?" is the same as "would you be willing to pay one dust-speck in your eye to save a person from 50 years of torture, conditional on about 3^^^3 other people also being willing?"
    -1benelliott13y
    Let me try and get this straight: you are presenting me with a number of moral dilemmas and asking me what I would do in them.

    1) Me and 3^^^^3 - 1 other people all vote on whether we get dust specks in the eye or some other person gets tortured. I vote for torture. It is astonishingly unlikely that my vote will decide, but if it doesn't then it doesn't matter what I vote, so the decision is just the same as if it was all up to me.

    2) Me and 3^^^^3 - 1 other people all vote on whether everyone who voted for this option gets a dust speck in the eye or some other person gets tortured. This is a different dilemma, since I have to weigh up three things instead of two: the chance that my vote will save about 3^^^^3 people from being dust-specked if I vote for torture, the chance that my vote will save one person from being tortured if I vote for dust specks, and the (much higher) chance that my vote will save me and only me from being dust-specked if I vote for torture. I remember reading somewhere that the chance of my vote being decisive in such a situation is roughly inversely proportional to the square root of the number of people (please correct me if this is wrong). Assuming this is the case, I still vote for torture, since the term for saving everyone else from dust specks still dwarfs the other two.

    3) I have to choose whether I will receive a dust speck or whether someone else will be tortured, but my decision doesn't matter unless at least half of 3^^^^3 - 1 other people would be willing to choose the dust speck. Once again the dilemma has changed: this time I have lost my ability to save other people from dust specks, and the probability of me successfully saving someone from torture has massively increased. I can safely ignore the case where the majority of others choose torture, since my decision doesn't matter then. Given that the others choose dust specks, I am not so selfish as to save myself from a dust speck rather than someone else from torture.

    You try
    0topynate13y
    Assuming a roughly 50-50 split, the inverse square-root rule is right. Now my issue is why you incorporate that factor in scenario 2, but not scenario 3. I honestly thought I was just rephrasing the problem, but you seem to see it differently? I should clarify that this isn't you unconditionally receiving a speck if you're willing to, but only if half the remainder are also so willing. The point of voting, for me, is not an attempt to induce scope insensitivity by personalizing the decision, but to incorporate the preferences of the vast majority (3^^^^3 out of 3^^^^3 + 1) of participants about the situation they find themselves in, into your calculation of what to do. The Torture vs. Specks problem in its standard form asks for you to decide on behalf of 3^^^^3 people what should happen to them; voting is a procedure by which they can decide. [Edit: On second thought, I retract my assertion that scenarios 1) and 2) have roughly the same stakes. That in scenario 1) huge numbers of people who prefer not to be dust-specked can get dust-specked, and in scenario 2) no one who prefers not to be dust-specked is dust-specked, makes much more of a difference than a simple doubling of the number of specks.] By the way, the problem as stated involves 3^^^3, not 3^^^^3, people, but this can't possibly matter, so never mind.
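The inverse square-root rule being discussed is just the central term of the binomial distribution: among N other voters each voting 50-50, your vote is decisive only on an exact tie, which happens with probability about sqrt(2/(pi*N)). A quick check of the exact value against the Stirling approximation (function names are mine):

```python
import math

def p_decisive(n_others):
    """Chance of an exact tie among n_others (even) 50-50 voters --
    the only case in which your own vote decides the outcome."""
    return math.comb(n_others, n_others // 2) / 2 ** n_others

def stirling_estimate(n_others):
    """Central binomial term approximated via Stirling's formula."""
    return math.sqrt(2 / (math.pi * n_others))

for n in (100, 10_000):
    exact, approx = p_decisive(n), stirling_estimate(n)
    # The approximation is already within 1% at n = 100.
    assert abs(exact - approx) / exact < 0.01
```

So with a 50-50 electorate of size N, the chance of being pivotal falls off like 1/sqrt(N), which is the factor benelliott invokes in scenario 2.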
    0benelliott13y
    There are actually two differences between 2 and 3. The first is that in 2 my chance of affecting the torture is negligible, whereas in 3 it is quite high. The second difference is that in 2 I have the power to save huge numbers of others from dust specks, and it is this difference which is important to me, since when I have that power it dwarfs the other factors so much as to be the only deciding factor in my decision. In your 'rephrasing' of it you conveniently ignore the fact that I can still do this, so I assumed I no longer could, which made the two scenarios very different. I think also, as a general principle, any argument of the type you are formulating which does not pay attention to the specific utilities of torture and dust-specks, instead just playing around with who makes the decision, can also be used to justify killing 3^^^^3 people to save one person from being killed in a slightly more painful manner.
    0HonoreDB13y
    The point of Torture vs. Dust Specks is that our moral intuition dramatically conflicts with strict utilitarianism. Your thought experiment helps express your moral intuition, but it doesn't do anything to resolve the conflict. Although, come to think of it, I think there's an argument to be made that the majority would answer no. If we interpret 3^^^3 people to mean qualitatively distinct individuals, there's not enough room in humanspace for all of those people to be human--the vast majority will be nonhumans. It can be argued, at least, that if you pick a random nonhuman individual, that individual will not be altruistic towards humans.
    2ata13y
    How about each of those 3^^^3 is asked whether they would accept a dust speck in the eye to save someone from 1/3^^^3 of 50 years of torture, and everyone's choice is granted? (i.e. the ones who say they'd accept a dust speck get a dust speck, and the person is tortured for an amount of time proportional to the number of people who refused.) I'm not quite sure what I'd expect to have happen in that case. That's harder than the moral question because we have to imagine a world that actually contains 3^^^3 different (i.e. not perfectly decision-theoretically correlated) people, and any kind of projection about that kind of world would pretty much be making stuff up. But as for the moral question of what a person in this situation should say, I'd say the reasoning is about the same — getting a dust speck in your eye is worse than 50/3^^^3 years of torture, so refuse the speck. (That's actually an interesting way of looking at it, because we could also put it in terms of each person choosing whether they get specked or they themselves get tortured for 50/3^^^3 years, in which case the choice is really obvious — but if you're still working with 3^^^3 people, and they all go with the infinitesimal moment of torture, that still adds up to a total 50 years of torture.) Edit: Actually, for that last scenario, forget 50/3^^^3 years, that's way less than a Planck interval. So let's instead multiply it by enough for it to be noticeable to a human mind, and reduce the intensity of the torture by the same factor.

    Interesting question. I think a similar real-world situation is when people cut in line.

    Suppose there is a line of 100 people, and the line is moving at a rate of 1 person per minute.

    Is it ok for a new person to cut to the front of the line, because it only costs each person 1 extra minute, or should the new person stand at the back of the line and endure a full 100 minute wait?

    Of course, not everyone in line endures the same wait duration; a person near the front will have a significantly shorter wait than a person near the back. To address that issue o... (read more)
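On a raw minutes-of-waiting count, the two options in this queue example tie exactly, which is what makes it a nice miniature of the specks problem; a trivial sketch (numbers taken from the comment):

```python
LINE_LENGTH = 100        # people already waiting
RATE = 1                 # minutes of service per person

# If the newcomer cuts to the front, everyone behind waits one extra
# service slot; if the newcomer queues, they wait out the whole line.
cost_of_cutting = LINE_LENGTH * RATE       # 100 person-minutes, spread thin
cost_of_queueing = LINE_LENGTH * RATE      # 100 person-minutes, on one person

# The totals are equal; the disagreement is entirely over whether 100
# one-minute delays and one 100-minute wait are interchangeable.
assert cost_of_cutting == cost_of_queueing == 100
```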

    0Vaniver13y
    This is one of the reasons why utilitarianism makes me cringe. "We can do first-order calculations and come up with a good answer! What could go wrong?"
    [-][anonymous]13y00

    I would prefer the dust motes, and strongly. Pain trumps inconvenience.

    And yet...we accept automobiles, which kill tens of thousands of people per year, to avoid inconvenience. (That is, automobiles in the hands of regular people, not just trained professionals like ambulance drivers.) But it's hard to calculate the benefits of having a vehicle.

    Reducing the national speed limit to 30mph would probably save thousands of lives. I would find it unconscionable to keep the speed limit high if everyone were immortal. At present, such a measure would trade lives for parts of lives, and it's a matter of math to say which is better...though we could easily rearrange our lives to obviate most travel.

    2wedrifid13y
    I had to read that twice before I realised you meant "immortal like an elf" rather than "immortal like Jack Harkness and Connor MacLeod".

    Idea 1: dust specks, because on a linear scale (which seems to be always assumed in discussions of utility here) I think 50 years of torture is more than 3^^^3 times worse than a dust speck in one's eye.

    Idea 2: dust specks, because most people arbitrarily place bad things into incomparable categories. The death of your loved one is deemed to be infinitely worse than being stuck in an airport for an hour. It is incomparable; any number of 1-hour waits is less bad than a single loved one dying.

    0ata13y
    How much would you have to decrease the amount of torture, or increase the number of dust specks, before the dust specks would be worse?
    -1rstarkov13y
    I don't know. I don't suppose you claim to know at which point the number of dust specks is small enough that they are preferable to 50 years of torture? (which is why I think that Idea 2 is a better way to reason about this)
    [-][anonymous]13y00

    I think it might be interesting to reflect on the possibility that among the 3^^^3 dust speck victims there might be a smaller-but-still-vast number of people being subjected to varying lengths of "constantly-having-dust-thrown-in-their-eyes torture". Throwing one more dust speck at each of them is, up to permuting the victims, like giving a smaller-but-still-vast number of people 50 years of dust speck torture instead of leaving them alone.

    (Don't know if anyone else has already made this point - I haven't read all the comments.)

    These ethical questions become relevant if we're implementing a Friendly AI, and they are only of academic interest if I interpret them literally as a question about me.

    If it's a question about me, I'd probably go with the dust specks. A small fraction of those people will have time to get to me, and of those, none are likely to bother me over just a dust speck. If I were to advocate the torture, the victim or someone who knows him might find me and try to get revenge. I just gave you a data point about the psychology of one unmodifie... (read more)

    [-][anonymous]13y80

    I wonder if some people's aversion to "just answering the question" as Eliezer notes in the comments many times has to do with the perceived cost of signalling agreement with the premises.

    It's straightforward to me that answering should take the question at face value; it's a thought experiment, you're not being asked to commit to a course of action. And going by the question as asked the answer for any utilitarian is "torture", since even a very small increment of suffering multiplied by a large enough number of people (or an infinite... (read more)

    I choose the specks. My utility function u(what happens to person 1, what happens to person 2, ..., what happens to person N) doesn't equal f_1(what happens to person 1) + f_2(what happens to person 2) + ... + f_N(what happens to person N) for any choice of f_1, ..., f_N, not even allowing them to be different; in particular, u(each of n people gets one speck in their eye) approaches a finite limit as n approaches infinity, and this limit is less negative than u(one person gets tortured for 50 years).
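A non-additive utility function with the property this commenter describes is easy to exhibit: let the aggregate disutility of n single-speck harms approach a finite asymptote that stays less negative than the torture term. A minimal sketch with made-up constants:

```python
import math

SPECK_ASYMPTOTE = -1.0    # limit of total speck disutility (made up)
TORTURE = -1000.0         # disutility of one 50-year torture (made up)

def u_specks(n):
    """Bounded aggregate disutility: tends to SPECK_ASYMPTOTE as n grows."""
    return SPECK_ASYMPTOTE * (1 - math.exp(-n / 1e6))

# Under this (non-additive) aggregation, no number of specks ever
# outweighs the torture:
for n in (1, 10**6, 10**12, 10**100):
    assert u_specks(n) > TORTURE
```

Whether such bounded aggregation is defensible is exactly what the thread disputes; the sketch only shows the commenter's claim is mathematically consistent.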

    I spent quite a while thinking about this one, and here is my "answer".

    My first line of questioning is "can we just multiply and compare the sufferings ?" Well, no. Our utility functions are complicated. We don't even fully know them. We don't exactly know what are terminal values, and what are intermediate values in them. But it's not just "maximize total happiness" (with suffering being negative happiness). My utility function also values things like fairness (it may be because I'm a primate, but still, I value it). The &quo... (read more)

    [-][anonymous]12y60

    Let me attempt to shut up and multiply.

    Let's make the assumption that a single second of torture is equivalent to 1 billion dust specks to the eye. Since that many dust specks is enough to sandblast your eye, it seems a reasonable approximation.

    This means that 50 years of this torture is equivalent to giving 1 single person (50 × 365.25 × 24 × 60 × 60 × 1,000,000,000) dust specks to the eye.

    According to Google's calculator,

    (50 × 365.25 × 24 × 60 × 60 × 1,000,000,000) / (3^39) = 0.389354356
    (50 × 365.25 × 24 × 60 × 60 × 1,000,000,000) / (3^38) = 1.16806307

    Ergo, If someone co... (read more)
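The comment's arithmetic can be restated with explicit operators and verified (the one-billion-specks-per-second equivalence is the comment's own assumption):

```python
SECONDS_IN_50_YEARS = 50 * 365.25 * 24 * 60 * 60     # ~1.58e9 seconds
SPECKS_PER_TORTURE_SECOND = 10**9                    # comment's assumption

total_specks = SECONDS_IN_50_YEARS * SPECKS_PER_TORTURE_SECOND

# 50 years of torture lands between 3^38 and 3^39 single-speck harms,
# matching the two Google-calculator ratios the comment quotes.
assert 3**38 < total_specks < 3**39
```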

    [-][anonymous]12y00

    I tentatively like to measure human experience with logarithms and exponentials. Our hearing is logarithmic, loudness-wise, hence the unit dB. Human experiences are rarely linear, thus it is almost never true that f(x*a) = f(x)*a.

    In the above hypothetical, we can imagine the dust specks and the torture. If we propose that NO dust speck ever does anything other than cause mild annoyance (none ever enters the eye of a driver who blinks at an inopportune time and crashes), then I would propose we can say: awfulness(pain) = k^pain.

    A dust speck causes approxi... (read more)

    [This comment is no longer endorsed by its author]

    My utility function has non-zero terms for preferences of other people. If I asked each one of the 3^^^3 people whether they would prefer a dust speck if it would save someone a horrible fifty-year torture, they (my simulation of them) would say YES in 20*3^^^3-feet letters.

    2A1987dM12y
    Conversely, if you asked somebody if they'd be willing to be tortured for 50 years in order to save 3^^^3 people from getting each a dust speck in the eye, they'd likely say NO FREAKIN' WAY!!!. BTW, welcome to Less Wrong -- you can introduce yourself in the welcome thread.
    4TheOtherDave12y
    If I asked each of a million people if they would give up a dollar's worth of value if it would give an arbitrarily selected person ten thousand dollars' worth, and they each said yes, would it follow that destroying a million dollar's worth of value in exchange for ten thousand dollars' worth was a good idea? If, additionally, my utility function had non-zero terms for the preferences of other people, would it follow then?
    0Maelin12y
    I feel like this is misinterpreting gRR's comment. gRR is not claiming that nonutilitarian choices are preferable because the utility function has non-zero terms for preferences of other people. It is a necessary condition, but not a sufficient one. My model of other people says that a significantly smaller percentage of people would accept losing a dollar in order to grant one person ten grand, than would accept a dust speck in order to save one person 50 years of torture.
    0TheOtherDave12y
    As does mine. That's consistent with my understanding of their claim as well. Can you expand further on why you feel like this?
    0Maelin12y
    Sure, although updating upon reading your response, I now suspect that I have misinterpreted your comment. But I'll explain how I saw things when I first commented. Basically it looked like you were perceiving gRR's argument as a specific instance of the following general argument: You were then trying to reveal the fault in gRR's general argument by presenting a different example ($1m -> $10k) and asking if the same argument would still hold there (which you presume it wouldn't). Then you suggested throwing another premise, (1b) I have nonzero terms for others' preferences, and presumably replacing (2a) by (2b) which adds the requirement of (1b), and asking if that would make the argument hold. But gRR was not asserting that general argument - in particular, not premise (2a)/(2b). So it seemed like you seemed to be trying to tear down an argument that gRR was not constructing.
    1gRR12y
    It wouldn't follow that it is a good idea, or an efficient idea. But it would follow that it is the preferred idea, as calculated by my utility function that has non-zero terms for preferences of other people. Fortunately, my simulation of other people doesn't suddenly wish to help an arbitrary person by donating a dollar with a 99% transaction cost.
    0TheOtherDave12y
    Hm. As with Maelin's comment above, I seem to agree with every part of this comment, but I don't understand where it's going. Perhaps I missed your original point altogether.
    0gRR12y
    My point was that the "SPECKS!!" answer to the original problem, which is intuitively obvious to (I think) most people here, is not necessarily wrong. It can directly follow from expected utility maximization, if the utility function values the choice of people, even if the choice is "economically" suboptimal.
    3TimS12y
    A substantial part of talking about utility functions is to assert we are trying to maximize something about utility (total, average, or whatnot). It seems very strange to say that we can maximize utility by being inefficient in our conversion of other resources into utility. If your goal is to avoid certain "efficient" conversions for other reasons, then it doesn't make a lot of sense to say that you are really trying to implement a utility function. In other words, Walzer's Spheres of Justice concept, which states that some trade-offs are morally impermissible, is not really implementable in a utility function. To the extent that he (or I) might be modeled by a utility function, there are inevitably going to be errors in what the function predicts I would want or very strange discontinuities in the function.
    2gRR12y
    But I am trying to maximize the total utility, just a different one. Ok, let me put it this way. I will drop the terms for other people's preferences from my utility function. It is now entirely self-centered. But it still values the good feeling I get if I'm allowed to participate in saving someone from fifty years of torture. The value of this feeling is much more than the minuscule negative utility of a dust speck. Now, assume some reasonable percent of the 3^^^3 people are like me in this respect. Maximizing the total utility for everybody results in: SPECKS!!

    Now an objection can be stated that by the conditions of the problem, I cannot change the utilities of the 3^^^3 people. They are given and are equal to a minuscule negative value corresponding to the small speck of dust. Evil forces give me the sadistic choice and don't allow me to share the good news with everyone. Ok. But I can still imagine what the people would have preferred if given a choice. So I add a term for their preference to my utility function. I'm behaving like a representative of people in a government. Or like a Friendly AI trying to implement their CEV. My arguments have nothing to do with Walzer's Spheres of Justice concept, AFAICT.
    0TimS12y
    The point of picking a number the size of 3^^^3 is that it is so large that this statement is false. Even if 99% are like you, I can keep adding ^ and falsify the statement. If utility is additive at all, torture is the better choice. My reference to Walzer was simply to note that many interesting moral theories exist that do not accept that utility is additive. I don't accept that utility is additive.
    1gRR12y
    Why would it ever be false, no matter how large the number? Let S = negated disutility of speck, a small positive number. Let F = utility of good feeling of protecting someone from torture. Let P = the fraction of people who are like me (for whom F is positive), 0 < P <= 1. Then the total utility for N people, no matter what N, is N(PF - S), which is >0 as long as P*F > S. Well, we can agree that utility is complicated. I think it's possible to keep it additive by shifting complexities to the details of its calculation.
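    gRR's sum can be sketched directly. The magnitudes below (S, F, P) are made-up illustrative values, not numbers from the thread; the point is only that N(PF − S) stays positive for every N once PF > S:

    ```python
    # Illustrative values only (assumptions, not from the thread).
    S = 1e-9   # disutility of one dust speck
    F = 1e-3   # utility of the good feeling of helping save someone from torture
    P = 0.5    # fraction of people who share that feeling

    def total_utility(n_people):
        # N * (P*F - S): positive for every N whenever P*F > S
        return n_people * (P * F - S)

    print(total_utility(10**100) > 0)  # True, no matter how huge N gets
    ```

    Scaling N up (even toward 3^^^3) never flips the sign; that is exactly the disagreement with TimS, whose objection is that F should not appear in the sum at all.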
    1TimS12y
    This knowledge among the participants is an addition to the thought experiment. The original question: You are asking: Notice how your formulation has 3^^^3 in both options, while the original question does not.
    0gRR12y
    Yes, I stated and answered this exact objection two comments ago.
    2TimS12y
    I have come to believe that - like a metaphorical Groundhog Day - every conversation on this topic is the same lines from the same play, with different actors. This is the part of the play where I repeat more forcefully that you are fighting the hypo, but don't seem to be realizing that you are fighting the hypo. In the end, the lesson of the problem is not about the badness of torture or what things count as positive utility, but about learning what commitments you make with various assertions about the way moral decisions should be made.
    0fubarobfusco12y
    It sounds to me as if you're asserting that the ignorance of the 3^^^3 people to the fact that their specklessness depends on torture, makes a positive moral difference in the matter.
    1wedrifid12y
    That doesn't seem unreasonable. That knowledge is probably worse than the speck.
    3fubarobfusco12y
    Sure, it does have the odd implication that discovering or publicizing unpleasant truths can be morally wrong, though.
    2TimS12y
    That's a really good point. Does the "repugnant conclusion" problem for total utilitarians imply that they think informing others of bad news can be morally wrong in ordinary circumstances? Or just the product of a poor definition of utility? I take it as fairly uncontroversial that a benevolent lie when no changes in decision by the listener are possible is morally acceptable. That is, falsely saying "Your son survived the plane crash" to the father who is literally moments from dying seems morally acceptable because the father isn't going to decide anything differently based on that statement. But that's an unusual circumstance, so I don't think it should trouble us. Those of us who think torture is worse (i.e. are not total utilitarians) probably are not committed to any position on the revealing-unpleasant-truths-conundrum. Right?
    0fubarobfusco12y
    Agreed. Lying to others to manipulate them deprives them of the ability to make their own choices — which is part of complex human values — but in this case the father doesn't have any relevant choice to deprive him of. Not that I can tell. I suppose another way of looking at this is a collective-action or extrapolated-volition problem. Each individual in the SPECKS case might prefer a momentary dust speck over the knowledge that their momentary comfort implied someone else's 50 years of torture. However, a consequentialist agent choosing TORTURE over SPECKS is doing so in the belief that SPECKS is actually worse. Can that agent be implementing the extrapolated volition of the individuals?
    1fezziwig12y
    I don't realize it either; I'm not sure that it's true. Forgive me if I'm missing something obvious, but:

    * gRR wants to include the preferences of the people getting dust-specked in his utility function.
    * But as you point out, he can't; the hypothetical doesn't allow it.
    * So instead, he includes his extrapolation of what their preferences would be if they were informed, and attempts to act on their behalf.

    You can argue that that's a silly way to construct a utility function (you seem to be heading that way in your third paragraph), but that's a different objection.
    1TimS12y
    If you want to answer a question that isn't asked by the hypothetical, you are fighting the hypo. That's basically the paradigmatic example of "fighting the hypo." I think gRR has the right answer to the question he is asking. But it is a different one that Eliezer was asking, and teaches different lessons. To the extent that gRR thinks he has rebutted the lessons from Eliezer's question, he's incorrect.
    1gRR12y
    I'm not sure why you think I'm asking a different question. Do you mean to say that in Eliezer's original problem all of the utilities are fixed, including mine? But then the question appears entirely without content: "Here are two numbers, this one is bigger than that one, your task is to always choose the bigger number. Now which number do you choose?" Besides, if this is indeed what Eliezer meant, then his choice of "torture" for one of the numbers is inconsistent. Torture always has utility implications for other people, not just the person being tortured. I hypothesize that this is what makes it different (non-additive, non-commensurable, etc.) for some moral philosophers.
    0TimS12y
    As fubarobfusco pointed out, your argument includes the implication that discovering or publicizing unpleasant truths can be morally wrong (because the participants were ignorant in the original formulation). It's not obvious to me that any moral theory is committed to that position. And without that moral conclusion, I think Eliezer is correct that a total utilitarian is committed to believing that choosing TORTURE over SPECKS maximizes total utility. The repugnant conclusion really is that repugnant. All of that was not an obvious result to me.
    2gRR12y
    Any utility function that does not give an explicit overwhelmingly positive value to truth, and does give an explicit positive value to "pleasure" would obviously include the implication that discovering or publicizing unpleasant truths can be morally wrong. I don't see why it is relevant. If all the utilities are specified by the problem text completely, then TORTURE maximizes the total utility by definition. There's nothing to be committed about. But in this case, "torture" is just a label. It cannot refer to a real torture, because a real torture would produce different utility changes for people.
    0TheOtherDave12y
    Well, OK, sure, but... can't anything follow from expected utility maximization, the way you're approaching it? For all (X, Y), if someone chooses X over Y, that can directly follow from expected utility maximization, if the utility function values X more than Y. If that means the choice of X over Y is not necessarily wrong, OK, but it seems therefore to follow that no choice is necessarily wrong. I suspect I'm still missing your point.
    6gRR12y
    Given: a paradoxical (to everybody except some moral philosophers) answer "TORTURE" appears to follow from expected utility maximization.

    Possibility 1: the theory is right, everybody is wrong. But in the domain of moral philosophy, our preferences should be treated with more respect than elsewhere. We cherish some of our biases. They are what makes us human; we wouldn't want to lose them, even if sometimes they give an "inefficient" answer from the point of view of the simplest greedy utility function. These biases are probably reflexively consistent - even if we knew more, we would still wish to have them. At least, I can hypothesize that they are so, until proven otherwise. Simply showing me the inefficiency doesn't make me wish not to have the bias. I value efficiency, but I value my humanity more.

    Possibility 2: the theory (expected utility maximization) is wrong. But the theory is rather nice and elegant, I wouldn't wish to throw it away. So, maybe there's another way to fix the paradox? Maybe something is wrong with the problem definition? And lo and behold - yes, there is.

    Possibility 3: the problem is wrong. As the problem is stated, the preferences of 3^^^3 people are not taken into account. It is assumed that the people don't know and will never know about the situation - because their total utility change regarding the whole is either nothing or a single small negative value. If people were aware of the situation, their utility changes would be different - a large negative value from knowing about the tortured person's plight and being forcibly forbidden to help, or a positive value from knowing they helped. Well, there would also be a negative value from moral philosophers who would know and worry about inefficiency, but I think it would be a relatively small value, after all.

    Unfortunately, in the context of the problem, the people are unaware. The choice for the whole humanity is given to me alone. What should I do? Should I play dictator and make a ch
    0TheOtherDave12y
    OK. I think I understand you now. Thanks for clarifying.

    The mathematical object to use for the moral calculations needs not be homologous to real numbers.

    My way of seeing it is that the speck of dust, barely noticeable, will be strictly smaller than torture no matter how many instances of the speck of dust happen. That's just how my 'moral numbers' operate. The speck of dust equals A>0, the torture equals B>0, and A*N<B holds for any finite N. I forbid infinities (the number of distinct beings is finite).
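    This ordering is easy to realize concretely: compare (severity class, magnitude) pairs lexicographically, so the class always dominates. The tuple encoding below is my own illustration of the idea, not something from the thread:

    ```python
    # "Moral numbers" as (severity class, magnitude) pairs, compared
    # lexicographically: the class always dominates the magnitude.
    def n_specks(n):
        return (0, n)   # any number of specks stays in severity class 0

    torture = (1, 1)    # torture sits in a strictly higher class

    N = 3**27  # stand-in for "any finite N", however large
    print(n_specks(N) < torture)  # True: A*N < B for every finite N
    ```

    Python tuples already compare this way, so no infinities are needed; the first component simply takes precedence, exactly as described.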

    If you think that's necessarily irrational you have a lot of mathematics to learn. You can start with... (read more)

    2TimS12y
    If I understand correctly, then I agree with you. But this viewpoint has consequences.
    0Dmytry12y
    The linked post still assumes that discomfort space is one-dimensional, which it needs not be. The decision outcomes do need to behave like comparison does (if a>b and b>c it must follow that a>c), but that's about it. Bottom line is, we can't very well reflect on how we think about this issue, so it's hard to come up with some model that works the same as your head, and which you can reflect on, calculate with a computer, etc. By the way, consider a being made of 10^30 parts with 10^30 states each. That's quite a big being, way bigger than a human. The number of distinct states of such a being is (10^30)^(10^30) = 10^(30*10^30), which is unimaginably smaller than 3^^^3. You can pick beings that are to humans as humans are to amoeba, repeated many times, and still be waaay short of 3^^^3. The guys who chose torture, congrats on also having a demonstrable reasoning failure when reasoning about huge numbers. edit: embarrassing math glitch of my own. It is difficult to reason about huge numbers and easy to miss something, such as the number of 'people' exceeding the number of possible human mind states by unimaginably far.
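    Dmytry's magnitude comparison can be checked in logarithms (a sketch; the tower bounds below are my own). The giant being's state count does beat 3^^4, but already 3^^5 leaves it behind, and 3^^^3 = 3^^7625597484987 is unimaginably further up the tower:

    ```python
    import math

    # log10 of the state count of a being with 10^30 parts, 10^30 states each:
    # (10^30)^(10^30) = 10^(30 * 10^30)
    log10_states = 30 * 10**30

    three_tower_3 = 7625597484987                    # 3^^3
    log10_tower_4 = three_tower_3 * math.log10(3)    # log10(3^^4), about 3.6e12

    print(log10_states > log10_tower_4)  # True: the being beats 3^^4...
    # ...but log10(3^^5) = 3^^4 * log10(3) is itself a number with trillions of
    # digits, dwarfing 3e31, and 3^^^3 = 3^^7625597484987 towers far above that.
    ```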

    Choosing TORTURE is making a decision to condemn someone to fifty years of torture, while knowing that 3^^^3 people would not want you to do so, would beg you not to, would react with horror and revulsion if/when they knew you did it. And you must do it for the sake of some global principle or something. I'd say it puts one at least into Well-intentioned Extremist / KnightTemplar category, if not outright villain.

    If an AI had made a choice like that, against known wishes of practically everyone, I'd say it was rather unfriendly.

    ADDED: Detailed

    People who choose torture, if the question was instead framed as the following would you still choose torture?

    "Assuming you know your lifespan will be at least 3^^^3 days, would you choose to experience 50 years worth of torture, inflicted a day at a time at intervals spread evenly across your life span starting tomorrow, or one dust speck a day for the next 3^^^3 days of your life?"

    2ArisKatsaris12y
    I've heard this rephrasing before, but it means less than you might think. Human instinct tells us to postpone the bad as much as possible. Put aside the dust-speck issue for the moment: let's compare torture to torture. I'd be tempted to choose 1000 years of torture over a single year of torture, if the 1000 years are a few million years in the future, but the single year had to start now. Does this fact mean I must concede that 1000 years of torture are less bad than a single year? Surely not. It just illustrates human hyperbolic discounting.
    2Nornagest12y
    Clever, but not, I think, very illuminating -- 3^^^3 is just as fantastically, intuition-breakingly huge as it ever was, and using the word "tomorrow" adds a nasty hyperbolic discounting exploit on top of that. All the basic logic of the original still seems to apply, and so does the conclusion: if a dust speck is in any way commensurate with torture (a condition assumed by the OP, but denied by enough objections that I think it's worth pointing out explicitly), pick Torture, otherwise pick Specks. One of the frustrating things about the OP is that most of the objections to it are based on more or less clever intuition pumps, while the post itself is essentially making a utilitarian case for ignoring your intuitions. Tends to lead to a lot of people talking past each other.
    1TheOtherDave12y
    I would almost undoubtedly choose a dust speck a day for the rest of my life. So would most people. The question remains whether that would be the right choice... and, if so, how to capture the principles underlying that choice in a generalizable way. For example, in terms of human intuition, it's clear that the difference between suffering for a day and suffering for five years plus one day is not the same as the difference between suffering for fifty years and suffering for fifty-five years, nor between zero days and five years. The numbers matter. But it's not clear to me how to project the principles underlying that intuition onto numbers that my intuition chokes on.
    0Dmytry12y
    Could it be that the 50 years of torture would also amount to more than a dust speck of daily discomfort, caused by having been psychologically traumatized by the torture, for the remaining 3^^^3 days? What if the 50 years of torture come at the end of the lifespan? I still would rather just take the dust speck now and then, though. Nothing forbids me from having a function more nonlinear than 3^^...^3 with any number of up-arrows; as a messy wired neural network I can easily implement imprecise algebra on numbers that are far beyond any up-arrow notation, or even numbers x, y, z... that are such that any finite integer x < y, any finite integer y < z, and so on. Infinities are not hard to implement at all. Consider comparisons on arrays made lexicographically: a > b if a[1] > b[1], or a[1] = b[1] and a[2] > b[2]. I'm using strings when I need that property in software, so that I can always make some value that will have precedence. edit: Note that one could think of the comparison between real values in the above example as a comparison between a[1]*bignumber + a[2], which may seem sensible, and then learn of the up-arrows, get mind-boggled, and reason that the up-arrows in a[2] will be larger than bignumber. But they never will change the outcome of the comparison as per the actual logic, where a[1] always matters more than a[2].
    0TheOtherDave12y
    Sure, if I factor in the knock-on effects of 50 years of torture (or otherwise ignore the original thought experiment and substitute my own) I might come to different results. Leaving that aside, though, I agree that the nature of my utility function in suffering is absolutely relevant here, and it's entirely possible for that function to be such that BIGNUMBER x SMALLSUFFERING is worth less than SMALLNUMBER x BIGSUFFERING even if BIGNUMBER >>>>>> SMALLNUMBER. The key word here is possible though. I don't really know that it is.

    Common sense tells me the torture is worse. Common sense is what tells me the earth is flat. Mathematics tells me the dust specks scenario is worse. I trust mathematics and will damn one person to torture.

    This "moral dilemma" only has force if you accept strict Bentham-style utilitarianism, which treats all benefits and harms as vectors on a one-dimensional line, and cares about nothing except the net total of benefits and harms. That was the state of the art of moral philosophy in the year 1800, but it's 2012 now.

    There are published moral philosophies which handle the speck/torture scenario without undue problems. For example if you accepted Rawls-style, risk-averse choice from a position where you are unaware whether you will be one of the speck... (read more)

    6steven046112y
    Rawls's Wager: the least well-off person lives in a different part of the multiverse than we do, so we should spend all our resources researching trans-multiverse travel in a hopeless attempt to rescue that person. Nobody else matters anyway.
    -1PhilosophyTutor12y
    If this is a problem for Rawls, then Bentham has exactly the same problem given that you can hypothesise the existence of a gizmo that creates 3^^^3 units of positive utility which is hidden in a different part of the multiverse. Or for that matter a gizmo which will inflict 3^^^3 dust specks on the eyes of the multiverse if we don't find it and stop it. Tell me that you think that's an unlikely hypothesis and I'll just raise the relevant utility or disutility to the power of 3^^^3 again as often as it takes to overcome the degree of improbability you place on the hypothesis. However I think it takes a mischievous reading of Rawls to make this a problem. Given that the risk of the trans-multiverse travel project being hopeless (as you stipulate) is substantial and these hypothetical choosers are meant to be risk-averse, not altruistic, I think you could consistently argue that the genuinely risk-averse choice is not to pursue the project since they don't know this worse-off person exists nor that they could do anything about it if that person did exist. That said, diachronous (cross-time) moral obligations are a very deep philosophical problem. Given that the number of potential future people is unboundedly large, and those people are at least potentially very badly off, if you try to use moral philosophies developed to handle current-time problems and apply them to far-future diachronous problems it's very hard to avoid the conclusion that we should dedicate 100% of the world's surplus resources and all our free time to doing all sorts of strange and potentially contradictory things to benefit far-future people or protect them from possible harms. This isn't a problem that Bentham's hedonistic utilitarianism, nor Eliezer's gloss on it, handles any more satisfactorily than any other theory as far as I can tell.

    The dust speck is a slight irritation. Hearing about someone being tortured is a bigger irritation. Also, pain depends greatly on concentration. Something that hurts "twice as much" is actually much worse: let's say it is a hundred times worse. Of course this levels off (it is a curve) at some point, but in this case that is not a problem, as we can say that the torture is very close to the physical maximum and the specks are very close to the physical minimum pain. The difference between the speck and the torture is immense. Difference in time... (read more)

    At first, I picked the dust specks as being the preferable answer, and it seemed obvious. What eventually turned me around was when I considered the opposite situation -- with GOOD things happening, rather than BAD things. Would I prefer that one person experience 50 years of the most happiness realistic in today's world, or that 3^^^3 people experience the least good, good thing?

    0shminux12y
    Why do you think that there has to be a symmetry between positive and negative utility?
    0Alicorn12y
    The question has been posed.

    I was very surprised to find that a supporter of the Complexity of Value hypothesis and the author who warns against simple utility functions advocates torture using simple pseudo-scientific utility calculus.

    My utility function has constraints that prevent me from doing awful things to people, unless it would prevent equally awful things done to other people. That this is a widely shared moral intuition is demonstrated by the reaction in the comments section. Since you recognize the complexity of human value, my widely-shared preferences are presumably v... (read more)

    6TheOtherDave12y
    There's something really odd about characterizing "torture is preferable to this utterly unrealizable thing" as "advocating torture." It's not obviously wrong... I mean, someone who wanted to advocate torture could start out from that kind of position, and then once they'd brought their audience along swap it out for simply "torture is preferable to alternatives", using the same kind of rhetorical techniques you use here... but it doesn't seem especially justified in this case. Mostly, it seems like you want to argue that torture is bad whether or not anyone disagrees with you. Anyway, to answer your question: to a total utilitarian, what matters is total utility-change. That includes knock-on effects, including mental discomfort due to hearing about the torture, and the way torturing increases the likelihood of future torture of others, and all kinds of other stuff. So transmitting information about events is itself an event with moral consequences, to be evaluated by its consequences. It's possible that keeping the torture a secret would have net positive utility; it's possible it would have net negative utility. All of which is why the original thought experiment explicitly left the knock-on effects out, although many people are unwilling or unable to follow the rules of that thought experiment and end up discussing more real-world plausible variants of it instead (as you do here). Well, in some bizarre sense that's true. I mean, if I'm being tortured right now, but nobody has any information from which the fact of that torture can be deduced (not even me) a utilitarian presumably concludes that this is not an event of moral significance. (It's decidedly unclear in what sense it's an event at all.) Sure, that seems likely. I endorse killing someone over allowing a greater amount of bad stuff to happen, if those are my choices. Does that answer your question? (I also reject your implication that killing someone is necessarily worse than torturing them for 50
    1A1987dM12y
    You know, in natural language “x is better than y” often has the connotation “x is good”, and people go at lengths to avoid such wordings if they don't want that connotation. For example, “‘light’ cigarettes are no safer than regular ones” is logically equivalent to “regular cigarettes are at least as safe as ‘light’ ones”, but I can't imagine an anti-smoking campaign saying the latter.
    2TheOtherDave12y
    Fair enough. For maximal precision I suppose I ought to have said "I reject your characterization of..." rather than "There's something really odd about characterizing...," but I felt some polite indirection was called for.
    0DaFranker12y
    Well, assuming the torture is artificially bounded to absolute impactlessness, then yes, it is irrelevant (in fact, it arguably doesn't even exist). However, a good rationalist utilitarian will retroactively consider future effects of the torture, supposing it is not so bounded, and once the fact of the torture can then be deduced, it does retroactively become a morally significant event in a timeless perspective, if I understand the theory properly.
    0thomblake12y
    The point was not necessarily to advocate torture. It's to take the math seriously. Just how many people do you expect to hear about the torture? Have you taken seriously how big a number 3^^^3 is? By how many utilons do you expect their disutility to exceed the disutility from the dust specks?
    2jacoblyles12y
    First, I don't buy the process of summing utilons across people as a valid one. Lots of philosophers have objected to it. This is a bullet-biting club, and I get that. I'm just not biting those bullets. I don't think 400 years of criticism of Utilitarianism can be solved by biting all the bullets. And in Eliezer's recent writings, it appears he is beginning to understand this. Which is great. It is reducing the odds he becomes a moral monster.

    Second, I value things other than maximizing utilons. I got the impression that Eliezer/Less Wrong agreed with me on that from the Complex Values post and posts about the evils of paperclip maximizers. So great evils are qualitatively different to me from small evils, even small evils done to a great number of people!

    I get what you're trying to do here. You're trying to demonstrate that ordinary people are innumerate, and you all are getting a utility spike from imagining you're more rational than them by choosing the "right" (naive hyper-rational utilitarian-algebraist) answer. But I don't think it's that simple when we're talking about morality. If it were, the philosophical project that's lasted 2500 years would finally be over!
    1thomblake12y
    You were the one who claimed that the mental discomfort from hearing about torture would swamp the disutility from the dust specks - I assumed from that, that you thought they were commensurable. I thought it was odd that you thought they were commensurable but thought the math worked out in the opposite direction. I believe Eliezer's post was not so much directed at folks who disagree with utilitarianism - rather, it's supposed to be about taking the math seriously, for those who are. If you're not a utilitarian, you can freely regard it as another reductio. You don't have to be any sort of simple or naive utilitarian to encounter this problem. As long as goods are in any way commensurable, you need to actually do the math. And it's hard to make a case for a utilitarianism in which goods are not commensurable - in practice, we can spend money towards any sort of good, and we don't favor only spending money on the highest-order ones, so that strongly suggests commensurability.

    No. One of those actions, or something different, happens if I take no action. Assuming that neither the one person nor the 3^^^3 people have consented to allow me to harm them, I must choose the course of action by which I harm nobody, and the abstract force harms people.

    If you instead offer me the choice where I prevent the harm (and that the 3^^^3+1 people all consent to allow me to do so), then I choose to prevent the torture.

    My maximal expected utility is one in which there is a universe in which I have taken zero additional actions without the conse... (read more)

    How bad is the torture option?

    Let's say a human brain can have ten thoughts per second; or the rate of human awareness is ten perceptions per second. Fifty years of torture then means nearly sixteen billion tortured thoughts, or perceptions of torture.

    Let's say a human brain can distinguish twenty logarithmic degrees of discomfort, with the lowest being "no discomfort at all", the second-lowest being a dust speck, and the highest being torture. In other words, a single moment of torture is 2^19 = 524288 times worse than a dust speck; and a dust speck is the smallest discomfort possible. Let's call a unit of discomfort a "dol" (from Latin dolor).

    In other words, the torture option means about sixteen billion moments × 2^19 dols; whereas the dust-specks option means 3^^^3 moments × 1 dol.

    The assumptions going into this argument are the speed of human thought or perception, and the scale of human discomfort or pain. These are not accurately known today, but there must exist finite limits — humans do not think or perceive infinitely fast; and the worst unpleasantness we can experience is not infinitely bad. I have assumed a log scale for discomfort because we use log scal... (read more)
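    Redoing the arithmetic explicitly, with the comment's own assumptions (10 perceptions per second, a torture moment worth 2^19 dols, a speck worth 1 dol); note the moment count comes out near 1.6 × 10^10:

    ```python
    # Dol arithmetic under the comment's assumptions.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600
    moments = 50 * SECONDS_PER_YEAR * 10   # perception-moments in 50 years, ~1.6e10
    dols_torture = moments * 2**19         # total torture dols, ~8e15

    print(f"{moments:.2e} moments, {dols_torture:.2e} dols")
    # The specks side is 3^^^3 moments x 1 dol -- a number too large even to store
    # in a computer, so under these assumptions SPECKS overwhelms TORTURE.
    ```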

    2TheOtherDave12y
    I suspect that I would prefer the false memory of having been tortured for five minutes to the false memory of having been tortured for a year, assuming the memories are close replicas of what memories of the actual event would be like. I would relatedly prefer that someone else experience the former rather than the latter, even if I'm perfectly aware the memory is false. This suggests to me that whatever I'm doing to make my moral judgments that torture is bad, it's not just summing the number of perception-moments... there are an equal number of perception-moments in those two cases, after all. (Specifically, none at all.) That said, this line of thinking quickly runs aground on the "no knock-on effects" condition of the initial thought experiment.
    2fubarobfusco12y
    True — we need a term for moments of discomfort caused by contemplation, not just ones caused by perception. It seems to me, though, that your brain can only perceive a finite number of gradations of unpleasant contemplation, too. The memory of being tortured for five minutes, the memory of being tortured for a year, and the memory of having gotten a dust speck in your eye could occupy points on this scale of unpleasantness.
    4Elithrion11y
    Actually, from what I read about related research in "Thinking, Fast and Slow", it's not clear that you would (or that the difference would be as large as you might expect, at least). It seems that memories of pain depend largely on the most intense moment of pain and on the final moment of pain, not necessarily on duration. For example, in one experiment (I read the book a week ago and write from memory), subjects were asked to put their hand in a bowl of cold water (a painful experience) for two minutes, then they were asked to put their hands in cold water for two minutes, followed by the water being warmed gradually over another 5 minutes. (There were reasonable controls, obviously.) Then they were asked which experience to repeat. The majority chose experience two, even though intuitively it is strictly worse than experience one. Of course, you'd have to find the actual related paper(s), check how high the correlation/ignoring-duration effect is, check if there's significant inter-individual variation (whether maybe you're an unusual person who cares about duration), but, regardless, there are significant reasons to doubt your intuitions in this scenario.
    0MugaSofer11y
    ... huh. I wonder if we might actually value experiences this way?
    2Elithrion11y
    Daniel Kahneman suggests that we do. We remember thing imperfectly and optimize for the way we remember things. Wiki has a quick summary.

    I think I have to go with the dust specks. Tomorrow, all 3^^^3 of those people will have forgotten entirely about the speck of dust. It is an event nearly indistinguishable from thermal noise. People, all of them everywhere, get dust specks in their eyes just going about their daily lives with no ill effect.

    The torture actually hurts someone. And in a way that's rather non-recoverable. Recoverability plays a large part in my moral calculations.

    But there's a limit to how many times I can make that trade. 3^^^3 people is a LOT of people, and it doesn't take ... (read more)

    3DaFranker12y
    What you're doing there is positing a "qualitative threshold" of sorts where the anti-hedons from the dust specks cause absolutely zero disutility whatsoever. This can be an acceptable real-world evaluation within loaded subjective context. However, the problem states that the dust specks have non-zero disutility. This means that they do have some sort of predicted net negative impact somewhere. If that impact is merely to slow down the brain's visual recognition of one word by even 0.03 seconds, in a manner that is directly causal and where the dust speck would have avoided this delay, then over 3^^^3 people that is still more man-hours of work lost than the sum of all lifetimes of all humans on Earth to this day ever. If that is not a tragic loss much more dire than one person being tortured, I don't see what could be. And I'm obviously being generous there with that "0.03 seconds" estimate. Theoretically, all this accumulated lost time could mean the difference between the extinction or survival of the human race to a pan-galactic super-cataclysmic event, simply by way of throwing us off the particular course of planck-level-exactly-timed course of events that would have allowed us to find a way to survive just barely by a few (total, relatively absolute) seconds too close for comfort. That last is assuming the deciding agent has the superintelligence power to actually compute this. If calculating from unknown future causal utilities, and the expected utility of a dust speck is still negative non-zero, then it is simple abstraction of the above example and the rational choice is still simply the torture.
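    The lost-time claim is easy to sanity-check. Every figure below is a rough assumption of mine (the 0.03-second delay from the comment, ~100 billion humans ever born, a ~50-year average lifespan); the point is how tiny the crossover population is next to 3^^^3:

    ```python
    # How many speck victims until the accumulated 0.03 s delays exceed
    # every human lifetime ever lived? (All figures are rough assumptions.)
    DELAY_S = 0.03                          # per-person delay from one speck
    HUMANS_EVER = 1e11                      # ~100 billion humans ever born
    LIFETIME_S = 50 * 365.25 * 24 * 3600    # ~50-year lifespan, in seconds

    all_lifetimes_s = HUMANS_EVER * LIFETIME_S
    crossover = all_lifetimes_s / DELAY_S   # people needed for lost time to win

    print(f"{crossover:.1e}")  # roughly 5e21 people -- and 3^^^3 exceeds
                               # that beyond any description
    ```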

    If you ask me the slightly different question, where I choose between 50 years of torture applied to one man, or between 3^^^3 specks of dust falling one each into 3^^^3 people's eyes and also all humanity being destroyed, I will give a different answer. In particular, I will abstain, because my moral calculation would then favor the torture over the destruction of the human race, but I have a built-in failure mode where I refuse to torture someone even if I somehow think it is the right thing to do.

    But that is not the question I was asked. We could also have the man tortured for fifty years and then the human race gets wiped out BECAUSE the pan-galactic cataclysm favors civilizations who don't make the choice to torture people rather than face trivial inconveniences.

    Consider this alternate proposal:

    Hello Sir and/or Madam:

    I am trying to collect 3^^^3 signatures in order to prevent a man from being tortured for 50 years. Would you be willing to accept a single speck of dust into your eye towards this goal? Perhaps more? You may sign as many times as you are comfortable with. I eagerly await your response.

    Sincerely,

    rkyeun

    PS: Do you know any masochists who might enjoy 50 years of torture?

    BCC: 3^^^3-1 other people.

    6Eliezer Yudkowsky12y
    We did specify no long-term consequences - otherwise the argument instantly passes, just because at least 3^^7625597484986 people would certainly die in car accidents due to blinking. (3^^^3 is 3 to the power of that.)
    0DaFranker12y
    I admit the argument of long-term "side effects" like extinction of the human race was gratuitous on my part. I'm just intuitively convinced that such possibilities would count towards the expected disutility of the dust motes in a superintelligent perfect rationalist's calculations. They might even be the only reason there is any expected disutility at all, for all I know. Otherwise, my puny tall-monkey brain wiring has a hard time imagining how a micro-fractional anti-hedon would actually count for anything other than absolute zero expected utility in the calculations of any agent with imperfect knowledge.
    2TheOtherDave12y
    Sure. Admittedly, when there are 3^^^3 humans around, torturing me for fifty years is also such a negligible amount of suffering relative to the current lived human experience that it, too, has an expected cost that rounds to zero in the calculations of any agent with imperfect knowledge, unless they have some particular reason to care about me, which in that world is vanishingly unlikely.
    0DaFranker12y
    Heh. When put like that, my original post / arguments sure seem not to have been thought through as much as I thought I had. Now, rather than thinking the solution obvious, I'm leaning more towards the idea that this eventually reduces to the problem of building a good utility function, one that also assigns the right utility value to the expected utility calculated by other beings based on unknown (or known?) other utility functions that may or may not irrationally assign disproportionate disutility to respective hedon-values. Otherwise, it's rather obvious that a perfect superintelligence might find a way to make the tortured victim enjoy the torture and become enhanced by it, while also remaining a productive member of society during all fifty years of torture (or some other completely ideal solution we can't even remotely imagine) - though this might be in direct contradiction with the implicit premise of torture being inherently bad, depending on interpretation/definition/etc. EDIT: Which, upon reading up a bit more of the old comments on the issue, seems fairly close to the general consensus back in late 2007.
    9Kawoomba12y
    If you still use "^" to refer to Knuth's up-arrow notation, then 3^^^3 != 3^(3^^26). 3^^^3 = 3^^(3^^3) = 3^^(3^27) != 3^(3^^27)
    2Eliezer Yudkowsky11y
    Fixed.
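Identities like the ones corrected above can be machine-checked for small arguments. Here is a minimal recursive sketch of Knuth's up-arrow notation (the function name `up_arrow` is my own; anything beyond tiny inputs is uncomputable in practice, 3^^^3 most of all):

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ^(n) b: n=1 is exponentiation, n=2 is a^^b."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

assert up_arrow(3, 1, 3) == 27              # 3^3
assert up_arrow(3, 2, 3) == 7625597484987   # 3^^3 = 3^27
# Consistent with the correction: 3^^b = 3^(3^^(b-1)), not 3^(3^^b).
assert up_arrow(3, 2, 3) == 3 ** up_arrow(3, 2, 2)
```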

If asked independently whether or not I would take a dust speck in the eye to spare a stranger 50 years of torture, I would say "sure". I suspect most people would if asked independently. It should make no difference to each of those 3^^^3 dust speck victims that there are another (3^^^3)-1 people who would also take the dust speck if asked.

It seems then that there are thresholds in human value. Human value might be better modeled by surreals than reals. In such a system we could represent the utility of 50 years of torture as -Ω and represe... (read more)

    1Ronny Fernandez12y
    Here's a suggestion: if someone going through a fate A, is incapable of noticing whether or not they're going through fate B, then fate A is infinitely worse than fate B.
    2Kindly12y
    That's a fairly manipulative way of asking you to make that decision, though. If I were asked whether or not I would take a hard punch in the arm to spare a stranger a broken bone, I would answer "sure", and I suspect most people would, as well. However, it is pretty much clear to me that 3^^^3 people getting punched is much much worse than one person breaking a bone.
    1fubarobfusco12y
    That rests on the assumption that each person only cares about their own dust speck and the possible torture victim. If people are allowed to care about the aggregate quantity of suffering, then this choice might represent an Abilene paradox.

    The other day, I got some dirt in my eye, and I thought "That selfish bastard, wouldn't go and get tortured and now we all have to put up with this s#@$".

    I don't see that it's necessary -- or possible, for that matter -- for me to assign dust specks and torture to a single, continuous utility function. On a scale of disutility that includes such events as "being horribly tortured," the disutility of a momentary irritation such as a dust speck in the eye has a value of precisely zero -- not 0.000...0001, but just plain 0, and of course, 0 x 3^^^3 = 0.

    Furthermore, I think the "minor irritations" scale on which dust specks fall might increase linearly with the time of exposure, and would c... (read more)

    0Kindly12y
    If dust specks have a value of 0, then what's the smallest amount of discomfort that has a nonzero value instead? Use that as your replacement dust speck. And of course, the disutility of torture certainly increases in nonlinear ways with time. The 3^^^3 is there to make up for that. 50 years of torture for one person is probably not as bad as 25 years of torture for a trillion people. This in turn is probably not as bad as 12.5 years of torture for a trillion trillion people (sorry my large number vocabulary is lacking). If we keep doing this (halving the torture length, multiplying the number of people by a trillion) then are we always going from bad to worse? And do we ever get to the point where each individual person tortured experiences about as much discomfort as our replacement dust speck?
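A quick sketch of the halving sequence described here, under the crude (and deliberately naive) assumption that total suffering is just torture-length times head count:

```python
# Each step halves the torture length and multiplies the number of
# victims by a trillion, so the naive total of person-years of
# suffering grows by a factor of 5e11 per step.
years, people = 50.0, 1
totals = []
for step in range(4):
    totals.append(years * people)
    years /= 2
    people *= 10**12

for step, total in enumerate(totals):
    print(f"step {step}: {total:.3g} person-years")
```

On this naive accounting every halving step makes things vastly worse, which is exactly the intuition pump being offered; the open question is whether the accounting survives all the way down to dust-speck-sized discomforts.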
    0mantis12y
"If dust specks have a value of 0, then what's the smallest amount of discomfort that has a nonzero value instead?"

    I don't know exactly where I'd make the qualitative jump from the "discomfort" scale to the "pain" scale. There are so many different kinds of unpleasant stimuli, and it's difficult to compare them. For electric shock, say, there's probably a particular curve of voltage, amperage and duration below which the shock would qualify as discomfort, with a zero value on the pain scale, and above which it becomes pain (I'll even go so far as to say that for short periods of contact, the voltage and amperage values lie between those of a violet wand and those of a stun gun). For localized heat, I think it would have to be at least enough to cause a small first-degree burn; for localized cold, enough to cause the beginnings of frostbite (i.e. a few living cells lysed by the formation of ice crystals in their cytoplasm). For heat and cold over the whole body, it would have to be enough to overcome the body's natural thermostat, initiating hypothermia or heatstroke.

    It occurs to me that I've purposefully endured levels of discomfort I would probably regard as pain, with a non-zero value on the torture scale, had they been inflicted on me involuntarily: working out at the gym (which has an expected payoff in health and appearance, of course), and wearing an IV for two 36-hour periods in a pharmacokinetic study for which I'd volunteered (it paid $500); I would certainly do so again, for the same inducements. Choice makes a big difference in our subjective experience of an unpleasant stimulus.

    "50 years of torture for one person is probably not as bad as 25 years of torture for a trillion people."

    Of course not; by the scale I posited above, 50 years for one person isn't even as bad as 25 years for two people.

    "If we keep doing this (halving the torture length, multiplying the number of people by a trillion) then are we always going from bad to worse?"
    0Kindly12y
    In other words, it follows that 1 person being tortured for 50 years is better than 3^^^3 people being tortured for a millisecond. You're well on your way to the dark side.
    0mantis12y
I might have to bring it up to a minute or two before I'd give you that -- I perceive the exponential growth in disutility for extreme pain over time during the first few minutes/hours/days as very, very steep.

    Now, if we posit that the people involved are immortal, that would change the equation quite a bit, because fifty years isn't proportionally that much more than fifty seconds in a life that lasts for billions of years; but assuming the present human lifespan, fifty years is the bulk of a person's life. What duration of torture qualifies as a literal fate worse than (immediate) death, for a human with a life expectancy of eighty years? I'll posit that it's more than five years and less than fifty, but beyond that I wouldn't care to try to choose.

    Let's step away from outright torture and look at something different: solitary confinement. How long does a person have to be locked in a room against his or her will before it rises to a level that would have a non-zero disutility you could multiply by 3^^^3 to get a higher disutility than that of a single person (with a typical, present-day human lifespan) locked up that way for fifty years? I'm thinking, off the top of my head, that non-zero disutility on that scale would arise somewhere between 12 and 24 hours.
    2aspera11y
The idea that the utility should be continuous is mathematically equivalent to the idea that an infinitesimal change on the discomfort/pain scale should give an infinitesimal change in utility. If you don't use that axiom to derive your utility function, you can have sharp jumps at arbitrary pain thresholds. That's perfectly OK - but then you have to choose where the jumps are.
    1shminux11y
    It could be worse than that: there might not be a way to choose the jumps consistently, say, to include different kinds of discomfort, some related to physical pain and others not (tickling? itching? anguish? ennui?)
    1mantis11y
    I think that's probably more practical than trying to make it continuous, considering that our nervous systems are incapable of perceiving infinitesimal changes.
    1aspera11y
    Yes, we are running on corrupted hardware at about 100 Hz, and I agree that defining broad categories to make first-cut decisions is necessary. But if we were designing a morality program for a super-intelligent AI, we would want to be as mathematically consistent as possible. As shminux implies, we can construct pathological situations that exploit the particular choice of discontinuities to yield unwanted or inconsistent results.
    -2[anonymous]12y
    If getting hit by a dust speck has u = 0, then air pressure great enough to crush you has u = 0.
    0mantis12y
    Nope, that doesn't follow; multiplication isn't the only possible operation that can be applied to this scale.

    Incidentally, I think that if you pick "dust specks," you're asserting that you would walk away from Omelas; if you pick torture, you're asserting that you wouldn't.

    0TheOtherDave12y
The kind of person who chooses an individual suffering torture in order to spare a large enough number of other people lesser discomfort endorses Omelas. The kind of individual who doesn't make that choice not only walks away from Omelas, but wants it not to exist at all.
    1Kindly12y
    This is exactly what bothered me about the story, actually. You can choose to help the child and possibly doom Omelas, or you can choose not to, for whatever reason. But walking away doesn't solve the problem!
    0NancyLebovitz12y
    It certainly doesn't. However, it shows more moral perceptiveness than most people have.
    1TheOtherDave12y
    Well, it depends on the nature of the problem I've identified. If I endorse Omelas, but don't wish to partake of it myself, walking away solves that problem. (I endorse lots of relationships I don't want to participate in.)
    0Kindly12y
    That's not a moral objection, that's a personal preference.
    1TheOtherDave12y
    Yes, that's true. It's hard to have a moral objection to something I endorse.
    3mantis12y
    True. On reflection, it's patently obvious that the Less Wrong way to deal with Omelas is not to accept that the child's suffering is necessary to the city's welfare, and dedicate oneself to finding the third alternative. "Some of them understand why," so it's obviously possible to know what the connection is between the child and the city; knowing that, one can seek some other way of providing whatever factor the tormented child provides. That does mean allowing the suffering to go on until you find the solution, though -- if you free the child and ruin Omelas, it's likely too late at that point to achieve the goal of saving both.

    Bravo, Eliezer. Anyone who says the answer to this is obvious is either WAY smarter than I am, or isn't thinking through the implications.

    Suppose we want to define Utility as a function of pain/discomfort on the continuum of [dust speck, torture] and including the number of people afflicted. We can choose whatever desiderata we want (e.g. positive real valued, monotonic, commutative under addition).

    But what if we choose as one desideratum, "There is no number n large enough such that Utility(n dust specks) > Utility(50 yrs torture)." What doe... (read more)

    [-][anonymous]11y00

    To me, this experiment shows that absolute utilitarianism does not make a good society. Conversely, a decision between, say, person A getting $100 and person B getting $1 or both of them getting $2 shows absolute egalitarianism isn't satisfactory either (assuming simple transfers are banned). Perhaps the inevitable realization...is that some balance between them is possible, such as the weighted sum (sum indicating utilitarianism) with more weight applied to those who have less (this indicating egalitarianism) can provide such a balance?

    1BerryPick611y
    I don't see how you've arrived at that at all. Would you mind elaborating?
    0[anonymous]11y
To choose torture rather than dust specks is the utilitarian option, maximizing the total sum of subjective utility. This, however, causes extreme pain to 1 person merely to spare everyone else a negligible inconvenience. Anyone who picks dust specks is agreeing that utilitarianism is not always right (in fact Eliezer says in his follow-up to this that in doing so, one rejects a certain kind of utilitarianism). If you chose torture though, I can see why you'd feel otherwise.
    0BerryPick611y
    Where's your argument to the effect that absolute utilitarianism does not make a good society? Further, could you taboo "good society" while you're at it?
    1[anonymous]11y
    Right, I should have said "is not optimal" rather than "does not make a good society". My basic point being that if we agree that dust specks are best (which I admit we're not in unanimity about), we reject utilitarianism as an optimal allocation rule. I do not discredit it as a whole (i.e. utilitarianism still has some merit as a guideline), but if we reject it even once, "absolute utilitarianism" (the belief that it is always optimal) cannot hold.
    0BerryPick611y
    So your basic contention is: "If you agree that dust specks is the answer, you can't say that torture is the answer"? This sounds fairly obvious.
    1[anonymous]11y
    Heh, no I'm not saying that if X holds then ~X fails to hold, I expect that to also be the case, but that's not what I'm saying. I'm saying that we (those of us who chose dust specks) have chosen to reject utilitarianism and proposing an alternative, since we can't merely choose nonapples over apples.
    1BerryPick611y
    I had a feeling you weren't. :) Yes, that's accurate. If you take utilitarianism to its logical conclusion, you reach things like Torture in T v. DS problems. This conversation reminds me a lot of the excellent book "The Limits of Morality." I'd be curious as to why anyone would choose to reject utilitarianism on the basis of this thought experiment, though.
    1[anonymous]11y
    Then it seems we've reached an agreement, as the agreement theorem says we should. And yes, this is a thought experiment, it is unlikely that anyone will ever have to choose between such extremes (or that 3^^^3 people will ever exist, at once or even in total). However, whether real or not, if one rejects utilitarianism here, they can't simply say "Well it works in all real scenarios though". Eliezer could have just as easily mentioned a utility monster, but he felt like conveying the same thought experiment in a more original way.
    1BerryPick611y
    Right. I'm just unclear as to why people (not you specifically, I just meant it generally in my previous comment) interpret these kinds of stories as criticisms of utilitarianism. They are simply taking the axioms to their logical extremes, not offering arguments against accepting those axioms in the first place.
    0[anonymous]11y
    Ah, well if that's the point you're making then yes, you're indeed correct. Eliezer has by no means argued that utilitarianism is entirely wrong, just shown that its logical extreme is wrong (which may or may not have been his intention). If you're arguing that others are seeing this in a different way than we agreeably have, and have interpreted this article in a different way than is rational...well, you may also have a point there. It's not particularly surprising though, since there are dozens (perhaps hundreds) of ways to succumb to 1 or more fallacies and only 1 way to succumb to none.
    0ArisKatsaris11y
First of all, I am for the torture - so are 22.1% of the people recently surveyed, vs 36.8% who are for the dust specks - the rest don't want to respond or are unsure. Secondly, the issue of small dispersed disutilities vs large concentrated ones is one we constantly encounter in the real world, and time after time society accepts that for the purpose of e.g. the convenience of driving, we can tolerate the unavoidable tradeoff of the occasional traffic accident. Nor do we sacrifice every tiny little luxury just to gather resources to save a single extra life. If you had to break 7 billion legs to save a single man from being tortured, most people would not accept this tradeoff as acceptable. Once this logic is in place, all that remains is the scope insensitivity whereby people can't really intuit the vast size of 3^^^3.

    I would suggest the answer is fairly obviously that one person be horribly tortured for 50 years, on the grounds that the idea "there exists 3^^^3 people" is incomprehensible cosmic horror even before you add in the mote of dust.

    0ygert11y
    I am not so sure the existence of 3^^^3 people is a bad thing, but even granting that, assume that the 3^^^3 people exist regardless, and the two choices you have are: a) one of them is tortured for 50 years, or b) each and every one of them gets a mote of dust in the eye. In general, if you find an objection to the premises of a question that does not directly impact the "point" of the question, you should find a variant of the premises that removes that objection, and answer the variant of the question with that as the premise. See The Least Convenient Possible World.
    -1khriys11y
    Wait, does the original question simplify to: "[There exists 3^^^3 people] AND [of the set of all people there exists one that is tortured for 50 years OR of the set of all people, all get a mote of dust in the eye; which would you prefer]"? Because that would be quite different to: "[of the set of all people there exists one person who will be tortured for 50 years] OR [there exists 3^^^3 people AND each of them gets a mote of dust in the eye]; which would you prefer?" I answered the latter.
    1ArisKatsaris11y
    The point of the question was to ask us to judge between the disutility of many people dust specked and a single person tortured, not to place a value on whether 3^^^3 existences is itself a bad or a good thing. So, kinda of the former interpretation, except that the "3^^^3 people" part is merely the setting that enables the question, not really the point of the question... EDIT: Btw, since I'm an anti-specker, I tried to calculate an upper bound once, for number of specks... It ended up being about 1.2 * 10^20 dust specks
    -4khriys11y
    Surely the incomprehensibly large number is part of the point of the question, otherwise why not use the set of all existing people being dust specked? ~7 billion dustmoted vs. 1 tortured? 3^^^3 people is more sentient mass than could physically fit in our universe. Edit: Here's how I imagined that playing out: 3^^^3 people are brought into existence, displacing all the matter of the universe. Which, while still momentarily conscious, each gets a mote of this matter in their eye, causing minor discomfort. They then all immediately die, and in the following eternity their bodies and the remainder of the universe collapses to a single point.
    1ArisKatsaris11y
    Because 7 billion dust specks aren't enough. Obviously. The point of the question is an extremely large number of tiny disutilities compared to a single vast disutility. When you're imagining 3^^^3 deaths instead and the destruction of the universe, you're kinda missing the point.
    -2khriys11y
    What about 7 billion stubbed toes?
    1ArisKatsaris11y
    A few posts up, I've already linked to some calculations about various scenarios. You can look at them, if you are really genuinely interested - but why would you be? It's the principle of the thing that's interesting, not some inexact numbers one roughly calculates.

I have been reading Less Wrong for about 6 months, but this is my first post. I'm not an expert but an interested amateur. I read this post about 3 weeks ago and thought it was a joke. After working through replies and following links, I get that it is a serious question with serious consequences in the world today. I don’t think my comments duplicate others already in the thread, so here goes…

    Let’s call this a “one blink” discomfort (it comes and goes in a blink) and let’s say that on average each person gets one every 10 minutes during their waking hours. In re... (read more)

    0BerryPick611y
    What makes you think that making the numbers bigger changes anything? Anyone who switches answers between the original question and yours is confused.
    0ekramer11y
So you would be willing to keep sending more and more people to torture, each tortured person buying only a trivial reduction in discomfort for the majority. At what point would you say enough is enough?
    1BerryPick611y
    Once the positive consequences are outweighed by the negative consequences, obviously.
    3Kindly11y
    3^^^3 is very very very very large. If we're sending untold trillions of people to torture every year, out of 3^^^3 people total, that means that over the whole history of our universe, future and past, we have a vanishingly small chance of seeing a single person in our universe get taken away for torture. Two people is even more negligible. In the meantime, all discomforts up to the level of one mild cold get prevented for everyone. Heck, I'd be willing to absorb all the torture-probability our universe would receive for myself, just so I wouldn't have to suffer through the mild cold I'm having right now. I take a greater risk by walking down the stairs every day. Where do I sign up?
    3ekramer11y
OK, so you would accept less than one person per universe being tortured for 50 years so that everyone can avoid occasional mild discomfort. But that doesn’t answer the question of how far you are willing to take this logic. We haven’t even begun to touch serious discomfort, like half the population getting menstrual cramps every month, let alone prolonged pain and suffering. Would you send one person per planet for torture? One person per city? One person per family? The end result of this game is that a significant minority of people are being tortured at any one time so the majority can live lives free of discomfort, pain and suffering. So is your acceptable ratio 1:1,000,000, or 1:10?
    2Kindly11y
I'm pretty sure that right now more than 1 in 1,000,000 people around the world (that is, around 7000 people total) are experiencing suffering at least as bad as the hypothetical torture. Taking that into account, a ratio of 1:1,000,000 would be a strict improvement.

    Faced with a choice like that, I might selfishly refuse due to the chance that I would be one of the unlucky few, whereas right now I am doing pretty well compared to most people. But I would like to be the sort of person that wouldn't refuse. (I'm also not convinced that a life completely free of discomfort, pain, and suffering is possible or desirable; however, this objection doesn't reach the heart of the matter, so I'm willing to ignore it for the sake of argument.)

    The decision would be more difficult once we get to a ratio which does not strictly dominate our current situation. The terrible unfairness of a world where you're either free of all discomfort or being horribly tortured bothers me; for this reason, I think I wouldn't make the trade for any ratio where the total amount of suffering is roughly comparable to the status quo. I would have to do some research to give you a precise number.

    But now we are very far off from the original problem of dust specks vs. torture, in which the number 3^^^3 is specifically chosen to be sufficiently large that if you have an acceptable exchange rate at all, 1 : 3^^^3 will be acceptable to you.
    0ekramer11y
Don’t be bamboozled by big numbers: it is exactly the same problem. How far would you go to maximize pain in the minority in order to minimize it in the majority? As Eliezer argued so forcefully in the comments above, this problem exists on a continuum, and if you want to break the chain at any point you have to justify why that point and not another.

    Your argument for 1:1,000,000 does not go far enough in minimising pain for the majority. One person cannot take the pain of 1,000,000 people without dying or at least becoming unconscious. I suspect the maximum “other people’s pain” a person could endure without losing consciousness is broadly between 5 and 50; let’s say 25. So if you are willing to send one human being out of 3^^^3 people to be tortured for 50 years to remove a vanishingly small momentary discomfort for the majority, then you must also be willing to continually torture 1 in 25 people to eradicate all pain in the majority of the other 24. They are two ends of the same continuum; you cannot break the chain.

    Both instances are brutally unfair on the people tortured, but at least in the second instance the majority will lead better lives, while in the first instance not a single person is aware they had one less blink of discomfort in their entire lifetime. So my question remains to the torturers: are you a monster for sending 1 in 25 people to be tortured?
    4Kindly11y
When did we start talking about someone "taking the pain of other people"? This is news to me; it wasn't part of the argument before. This, I understand, is the reason you're suggesting that I would torture 1 in 25 people.

    Well, I wouldn't torture 1 in 25 people. I have already stated that if the total amount of pain is conserved (there may be difficulties with measuring "total pain", but bear with me here) then I prefer it to be spread out evenly rather than piled onto one person.

    In the dust speck formulation, the 3^^^3 being dustspecked are, in aggregate, suffering much more than the one person being tortured. 3^^^3 is very large. For any continuum you could actually describe that ends in "torture 1 in X people so that the remainder live perfect lives", X will still be approximately 3^^^3, possibly divided by some insignificant number like a googolplex that can be written down in mere scientific notation. At no point did anyone accept your 1:25 proposal.

I definitely think it is obvious what Eliezer is going for: 3^^^3 people getting dust specks in their eyes being the favorable outcome. I understand his reasoning, but I'm not sure I agree with the simple Benthamite way of calculating utility. Popular among modern philosophers is preference utilitarianism, where the preferences of the people involved are what constitute utility. Now consider that each of those 3^^^3 people has a preference that people not be tortured. Assuming that the negative utility each individual computes for someone being tortured is ... (read more)

This question reminds me of the dilemma posed to medical students. It went something like this:

If the opportunity presented itself to secretly, with no chance of being caught, 'accidentally' kill a healthy patient who is seen as wasting their life (smoking, drinking, not exercising, lack of goals, etc.) in order to harvest his/her organs and save 5 other patients, should you go ahead with it?

    From a utilitarian perspective, it makes perfect sense to commit the murder. The person who introduced me to the dilemma also presented the rationale for saying '... (read more)

    0Howdy11y
    I think this is an important thing to consider if we intend to make benevolent AI's that are harmonious with our own sense of morality
    -4MugaSofer11y
    Depends on whether we intend to use them as doctors or superintelligent gods, doesn't it?

    I used to think that the dust specks was the obvious answer. Then I realized that I was adding follow-on utility to torture (inability to do much else due to the pain) but not the dust specks (car crashes etc due to the distraction). It was also about then that I changed from two-boxing to one-boxing, and started thinking that wireheading wasn't so bad after all. Are opinions to these three usually correlated like this?

    -2MugaSofer11y
    Perhaps a better analogy would be dust specks that are only slightly distracting, detracting from whatever you were doing but not enough to cause you to make tangible mistakes, versus torturing somebody who's flying a plane at the time. In other words, this "follow-on utility" should be separated from opportunity costs, shouldn't it?

    I would suggest that torture has greater and greater disutility the larger the size of the society. So given a specific society of a specific size, the dust specks can never add up to more suffering than the torture; the greater the number of dust specs possible, the greater the disutility of the torture, and the torture will always add up to worse.

    If you're comparing societies of different size, it may be that the society with the dust specks has as much disutility as the society with the torture, but this is no longer a choice between dust specks and to... (read more)

    1shminux11y
Any utility function runs into a repugnant conclusion of one type or another. I wonder if there is a theorem to this effect, following from transitivity + continuity. Yours is no exception. For example, in your case of the disutility of torture growing larger with the size of the society, doesn't the disutility of dust specks grow both with the number of people subjected to it and the society's size? If not, how about the intermediate disutilities, that of a stubbed toe, a one-minute-long agony, and up and up slowly until you get to the full-blown 50 years of torture? Where is this magic boundary between the society-size-independent disutility of specks and the scaling-up disutility of torture?
    0Jiro11y
    As I noted, I'm trying to compute my utility function from my preferences, not the other way around. So in response to that I'd refine the utility function a bit: My new utility function has two terms, the main term and an inequality term. While my original statement that torture has a term based on the size of the society is still true, it is true because increasing the size of the society and still torturing 1 person means more inequality. The extra term applies to the dust specks as well, but I don't think this is a problem. In the original problem, everyone gets a dust speck, so there's no inequality term. The torture does have an inequality term and ends up always worse than the dust specks. If you want to move towards intermediate values by increasing the main term and keeping the inequality term constant, thus increasing the dust specks to stubbed toes and the like, you'll eventually come to some point where it exceeds the torture. But at that point they won't be dust specks--instead you'll decide that, for instance, many people suffering 1 day of torture will be worse than one person suffering 50 years of torture. I can live with that result. If you want to move towards intermediate values by increasing the inequality term and keeping the main term constant, you would "clump up" the dust specks, so one person receives many dust specks worth of disutility. If you keep doing this, you might eventually exceed the torture as well--but again, at the point where you exceed the torture, you won't have dust specks any more, you'll have larger clumps and you'll say that many clumps (equivalent to 1 day of torture each, for instance) can exceed one person getting 50 years. Again, I can live with that result. If you want to move towards intermediate values by increasing the inequality term and not bothering to keep the population constant, adding more people (in a way that is otherwise neutral if you ignore the inequality term) would increase the disutility. I hav

    There are many ways of approaching this question, and one that I think is valuable and which I can't find any mention of on this page of comments is the desirist approach.

    Desirism is an ethical theory also sometimes called desire utilitarianism. The desirist approach has many details for which you can Google, but in general it is a form of consequentialism in which the relevant consequences are desire-satisfaction and desire-thwarting.

    Fifty years of torture satisfies none and thwarts virtually all desires, especially the most intense desires, for fifty ... (read more)

    2TheOtherDave11y
    Can you clarify your grounds for claiming that barely noticeable dust specks neither satisfy nor thwart any desires?
    0selylindi11y
    Ah, yeah, that could be a problematic assumption. The grounds for my claim was generalization from my own experience. I have no consciously accessible desires which are affected by barely noticeable dust specks.
    1TheOtherDave11y
    Fair enough. I don't know what desirism has to say about consciously inaccessible desires, but leaving that aside for now... can you name an event that would thwart the most negligible desire to which you do have conscious access?
    1selylindi11y
    I have a high tolerance for chaotic surroundings, but even so I occasionally experience a weak, fleeting desire to impose greater order on other people's belongings in my physical environment. It could be thwarted by an event like a fly buzzing around my head once, which though not painful at all would divert my attention long enough to ensure that the desire died without having been successfully acted on.
    1TheOtherDave11y
    OK. So, if we assume for simplicity that a fly-buzzing event is the smallest measurable desire-thwarting event a human can experience, you can substitute "fly-buzz" for "dust speck" everywhere it appears here and translate the question into a desirist ethical reference frame. The question in those terms becomes: is there some number of people, each of whom is experiencing a single fly-buzz, where the aggregated desire-thwarting caused by that aggregate event is worse than a much greater desire-thwarting event (e.g. the canonical 50 years of torture) experienced by one person? And if not, why not?
    0selylindi11y
    Well, yes, but then as stated earlier I think desirism bites the bullet on "dust speck", too, given more dust specks. For a quick Fermi estimate, if I suppose that the fly-buzz scenario takes about 5 seconds and is 1/1000th as strong (in some sense) as the desire not to be tortured for 5 seconds, then the number of people at which the fly-buzz scenarios outweigh the torture is about a half trillion. Granted, for people who don't find desirism intuitive, this altered scenario changes nothing about the argument. I personally do find desirism intuitive, though unlikely to be a complete theory of ethics. So for me, given the dilemma between 50 years of torture of one individual and one dust-speck-in-eye or one fly-buzz-distraction for each of 3^^^3 people, I have a strong gut reaction of "Hell yes!" to preferring the specks and "Hell no!" to preferring the distractions.
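
The Fermi estimate above can be sanity-checked in a few lines of Python. The 5-second unit and the 1/1000 intensity ratio are the commenter's stated assumptions, not measured quantities:

```python
# Sanity check of the Fermi estimate above: how many 5-second fly-buzz
# distractions, each 1/1000th as bad as 5 seconds of torture, add up to
# 50 years of continuous torture? All constants are assumptions taken
# from the comment.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
torture_units = 50 * SECONDS_PER_YEAR / 5   # 50 years in 5-second units
buzz_disutility = 1 / 1000                  # one buzz, relative to one unit
people_needed = torture_units / buzz_disutility
print(f"{people_needed:.2e}")               # roughly 3e11
```

That comes out to roughly a third of a trillion, which is in the same ballpark as the half-trillion figure quoted above.
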
    0TheOtherDave11y
    Ah. I think I misunderstood you initially, then. Thanks for the clarification.
    [-][anonymous]11y00

    Forgive me for posting on such an old topic, but I've spent the better part of the last few days thinking about this and had to get my thoughts together somewhere. But after some consideration, I must say that I side with the "speckers" as it were.

    Let us do away with "specks of dust" and "torture" notions in an attempt to avoid arguing the relative value one might place on either event (i.e. - "rounding to 0/infinity"), and instead focus on the real issue. Replace torture with "event A" as the single most h... (read more)

    [This comment is no longer endorsed by its author]
    0[anonymous]11y
    To flip the question on its head: Would it be morally acceptable for an immeasurably large population of individuals to allow a single individual to be mercilessly tortured if it would spare the entire population some trivial inconvenience?
    0ArisKatsaris11y
    I think that example triggers our "no, it would be immoral" intuition, because an immoral population would make the choice against the trivial inconvenience with even greater ease. So, their saying "yes, do please allow some individual to be mercilessly tortured" functions as Bayesian evidence in support of their immorality. But if you had a large population of people decide between a trivial inconvenience for a different large population of people vs a single individual selected from their own midst to be mercilessly tortured, I'm guessing that the moral intuition would be the exact opposite, and it would feel immoral for this population to condemn a different large population to such an inconvenience just to benefit one of their own.
    0[anonymous]11y
    So you're saying it is potentially immoral if the group themselves decide to make the decision, but potentially moral if an outsider of the group makes the exact same decision?
    3ArisKatsaris11y
    No, I'm not saying that. Don't start with the ill-defined concept of "moral" and "immoral" -- start from the undisputed reality of the matter that people pass moral judgements on actions they hear about. So I'm saying that when Alice hears of X: group A choosing to sacrifice one of their own rather than inconvenience group B Alice is likely to pass a different moral judgement of that choice than if Alice hears of Y: group A choosing to sacrifice a member of group B rather than inconvenience themselves. Even though utilitarianism would argue that actions X and Y are equally moral taken by themselves, actions X and Y provide different evidence about whether group A is really acting on moral principles. So if the evolutionary purpose for our moral intuitions is to e.g. identify people as villains or not, action Y triggers our moral intuitions negatively and action X triggers our moral intuitions positively. Because at a deeper level the real purpose of judging the deed is to judge the doer.
    1[anonymous]10y
    So after a lot of thought, and about 5 months spent reading articles on this site, I think I can see the big picture a little more clearly now.

    Imagine having a really large collection of grains of sand that are all suspended in the air in the shape of a flat disk. Imagine, too, that it takes energy to move any single grain in the collection upwards or downwards, but once a grain is moved, it stays put unless moved again. Just conceptually, let grains of sand represent people and grain movement upwards/downwards represent utility/disutility.

    What Eliezer is arguing is that, assuming it takes the same amount of energy to move each individual grain of sand, then clearly it takes far less energy to move a single grain of sand very far downward than to move every grain of sand just slightly downward.

    What I initially objected to, and what I was trying to intuit through in my first post, is that perhaps the energy required to move a single grain of sand is not constant. Perhaps it increases with distance from the disk. I still hold to this objection.

    Even if so, it is certainly a valid conclusion to draw that moving a single grain far enough downwards requires less energy than moving every grain slightly downwards. Increasing the number of grains of sand certainly affects this. No matter what the growth factor may be on the nonlinear amount of energy required to move a single grain very far from its starting point, it is still finite. And you can add enough grains of sand so that the multiplicative factor of moving everything slightly downwards dwarfs the nonlinear growth factor.

    Thus, given enough people (and I do stress, enough people), it may be morally worse to subject them all to having a single dust speck enter their eye for a brief moment than to subject a single individual to torture for 50 years. It's just that our intuition says that for any scale our minds are even close to capable of reasoning about, exponential/super-exponential functi
    0[anonymous]10y

    3^^^3 people? ...

    I can see what point you were trying to make... I think.

    But I happen to have a significant distrust of classic utilitarianism: if you sum up the happiness of a society with a finite chance of lasting forever, and subtract the sum of all the pain, you get infinity minus infinity, which is at best conditionally convergent. The simplest patch is to insert a very, very, VERY tiny factor, reducing the weight of future societal happiness in your computation... Any attempt to translate to so many people ...places my intuition in charge of setting the summa... (read more)

    It seems to me that preference utilitarianism neatly reconciles the general intuitive view against torture with a mathematical utilitarian position. If a proportion p of those 3^^^3 people have a moral compunction against people being tortured, and the remainder are indifferent to torture but have a very slight preference against dust specks, then as long as p is not very small, the overall preference would be for dust specks (and if p was very small, then the moral intuitions of humanity in general have completely changed and we shouldn't be in a position to make any decisions anyway). Is there something I'm missing?

    0TheOtherDave11y
    I'm not sure I'm understanding your reasoning here. It seems like you're simply thinking about people's preferences for a dust speck in the eye, relative to their preferences for torture, without reference to how many dust specks and how much torture... is that right? If so, that doesn't seem to capture the general intuitive view. Intuitively, I strongly prefer losing a finger to losing an arm, but I prefer 1 person losing an arm to a million people losing a finger. (Or, put differently, I prefer a one-in-a-million chance of losing my arm to the certainty of losing a finger.) Quantity seems to matter.

    "...Some may think these trifling matters not worth minding or relating; but when they consider that tho' dust blown into the eyes of a single person, or into a single shop on a windy day, is but of small importance, yet the great number of the instances in a populous city, and its frequent repetitions give it weight and consequence, perhaps they will not censure very severely those who bestow some attention to affairs of this seemingly low nature. Human felicity is produc'd not so much by great pieces of good fortune that seldom happen, as by little advantages that occur every day."

    --Benjamin Franklin

    Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?

    I would prefer that 3^^^3 people get dust specks in their eyes, because that means that we either figured out how to escape the death of our universe, or expanded past our observable universe. [/cheating]

    2Sniffnoy10y
    s/cheating/EDT/ :)

    I have mixed feelings on this question. On the one hand, I agree that scope insensitivity should be avoided, and utility should count linearly over organisms. But at the same time, I'm not really sure the dust specks are even ... bad. If I could press a button to eliminate dust specks from the world, then (ignoring instrumental considerations, which would obviously dominate) I'm not sure whether I would bother.

    Maybe I'm not imagining the dust specks as being painful, whereas Eliezer had in mind more of a splinter that is slightly painful. Or we can imagine... (read more)

    1TheOtherDave10y
    There's nothing important about the dust-specks here; they were chosen as a concrete illustration of the smallest unit of disutility. If thinking about dust specks in particular doesn't work for you (you're not alone in this), I recommend picking a different illustration and substituting as you read.
    [-][anonymous]10y00

    Would it change anything if the subjects were extremely cute puppies?

    [This comment is no longer endorsed by its author]

    Would it change anything if the subjects were extremely cute puppies with eyes so wide and innocent that even the hardest lumberjack would swoon?

    [-][anonymous]9y00

    If the dust specks could cause deaths I would refuse to choose either. If I somehow still had to, I would pick the dust specks anyway, because I know that I myself would rather die in an accident caused by a dust particle than be tortured for even ten years.

    1Jiro9y
    Would you also refuse to drive because there is some non-zero chance that you'll hit someone and cause them to suffer torturous pain?
    0[anonymous]9y
    No, I would not. I am not sure what you are getting at, but my point is that the torture was a fact and the dust specks were extremely low probabilities, scattered over a big population. (Besides, I don't think it is possible for me to cause torturous pain to someone only by driving.)

    "The Lord Pilot shouted, fist held high and triumphant: "To live, and occasionally be unhappy!"" (three worlds collide) dust specks are just dust specks - in a way its helpful to sometimes have these things.

    But does the situation change if you distribute the dust specks not at 1 per person but at 10 per second per person?

    1Quill_McGee9y
    In the Least Convenient Possible World of this hypothetical, each and every dust speck causes a small constant amount of harm, with no knock-on effects (no increasing one's appreciation of the moments when one does not have dust in one's eye, no preventing a 'boring painless existence,' nothing of the sort). Now it may be argued whether this would occur with actual dust, but that is not really the question at hand. Dust was just chosen as being a 'seemingly trivial bad thing,' and if you prefer some other trivial bad thing, just replace that in the problem and the question remains the same.

    I think I've seen some other comments bring it up, but I'll say it again. I think people who go for the torture are working off a model of linear discomfort addition, in which case the torture would have to be as bad as 3^^^3 dust particles in the eye to justify taking the dust. However, I'd argue that it's not linear. Two specks of dust are more than twice as bad as one speck. 3^^^3 people getting specks in their eyes is unimaginably less bad than one person getting 3^^^3 specks (a ridiculous image, considering that's throwing universes into a dude's eye). So the speck very well may be less than 1/(3^^^3) as bad as torture.

    Even so, I doubt it. So a purely utilitarian calculus probably does suggest torturing the one guy.
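
The superlinearity claim above can be illustrated with a toy model. The exponent 1.5 is an arbitrary illustrative choice, not anything derived from the thread:

```python
# Toy model of the nonlinearity claim: per-person disutility grows
# superlinearly in the number of specks that one person receives.
# The exponent is a made-up illustration, not a measured quantity.
def disutility(specks_per_person, exponent=1.5):
    return specks_per_person ** exponent

n = 1_000_000
concentrated = disutility(n)       # one person hit by a million specks
distributed = n * disutility(1)    # a million people, one speck each
print(concentrated > distributed)  # True: concentration is worse
```

Note that under this model the total distributed harm still grows linearly in n, so a large enough population eventually outweighs any fixed harm; the per-speck disutility would have to be exactly zero, or the aggregation bounded, for that never to happen — which is the point the reply below presses on.
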

    0Jiro8y
    It has to be more than just not linear for that to solve it, it has to be so nonlinear that no finite number of specks at all can add up to the torture, since otherwise we could just ask the same question using the new number instead of 3^^^3. If it's so nonlinear that no finite number of specks can add up to torture, then you find the maximum amount that a finite number of specks can add up to. Then there are two amounts (one slightly more than that and one slightly less) where one amount cannot be balanced by dust particles and one amount can, which doesn't really make any sense.

    I think the problem here is the way the utility function is chosen. Utilitarianism is essentially a formalization of reward signals in our heads. It is a heuristic way of quantifying what we expect a healthy human (one that can grow up and survive in a typical human environment and has an accurate model of reality) to want. All of this only converges roughly to a common utility because we have evolved to have the same needs, which are necessarily pro-life and pro-social (since otherwise our species wouldn't be present today).

    Utilitarianism crudely abstract... (read more)

    I think the reason people are hesitant to choose the dust speck option is that they view the number 3^^^3 as being insurmountable. It's a combo chain that unleashes a seemingly infinite amount of points in the "Bad events I have personally caused" category on their scoreboard. And I get that. If the torture option is a thousand bad points, and the dust speck is 1/1000th of a point for each person, then the math clearly states that torture is the better option.

    But the thing is that you unleash that combo chain every day.

    Every time you burn a piece... (read more)

    0Good_Burning_Plastic8y
    But you're also potentially causing a mild benefit to a hypothetical number of people vastly higher than 3^^^3.
    2gjm8y
    Perhaps that is how some people who prefer TORTURE to DUST SPECKS are thinking, but I see no reason to think it's all of them, and I am pretty sure some of them have better reasons than the rather strawmanny one you are proposing. For instance, consider the following:
    * Which would you prefer: one person tortured for 50 years, or a trillion people tortured for 50 years minus one microsecond?
    * I guess you prefer the first. So do I.
    * Which would you prefer: a trillion people tortured for 50 years minus one microsecond, or a trillion trillion people tortured for 50 years minus two microseconds?
    * I guess you prefer the first. So do I.
    * ...

    Now repeat this until we get to ...
    * Which would you prefer: N/10^12 people (note: N is very very large, but also vastly smaller than 3^^^3) tortured for one day plus one microsecond, or N people tortured for one day?
    * I am pretty sure you prefer the first option in every case up to here.

    Now perhaps microseconds are too large, so let's adjust a little:
    * Which would you prefer: N people tortured for one day, or 10^12*N people tortured for one day minus one nanosecond?
    * ... and continue iterating -- I bet you prefer the first option in every case -- until we get to ...
    * Which would you prefer: M/10^12 people tortured for ten seconds plus one nanosecond, or M people tortured for ten seconds? (M is much much larger than N -- but still vastly smaller than 3^^^3.)
    * I am pretty sure you still prefer the first case every time.

    Now, once the times get much shorter than this it may be difficult to say whether something is really torture exactly, so let's start adjusting the severity as well. Let's first of all replace the rather ill-defined "torture" with something less ambiguous.
    * Which would you prefer: M people tortured for ten seconds, or 10^12*M people tortured for 9 seconds and then kicked really hard on the kneecap but definitely not hard enough to cause permanent damage?
    * The intention is that the t
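
For a sense of scale, the stepwise chain above can be sketched numerically. A minimal version (simplifying the comment's argument to a uniform one-microsecond reduction and a 10^12 population multiplier at every single step) shows how many people the chain could ever require:

```python
# Rough scale of the stepwise argument above, under the simplifying
# assumption of one microsecond shaved off and a 10^12 population
# multiplier per step, starting from 50 years of torture.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
steps = int(50 * SECONDS_PER_YEAR * 1e6)  # microseconds in 50 years
log10_people = 12 * steps                 # each step multiplies people by 10^12
print(f"about 10^{log10_people} people at the final step")
```

That is about 10^(1.9×10^16) people: astronomically large, yet still an ordinary exponential. 3^^^3 is a tower of 3s roughly 7.6×10^12 levels high, so the whole chain fits inside the 3^^^3 budget with unimaginable room to spare.
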
    1Tetraspace8y
    New situation: 3^^^3 people being tortured for 50 years, or one person getting tortured for 50 years and getting a single speck of dust in their eye. By do unto others, I should, of course, torture the innumerably vast number of people, since I'd rather be tortured for 50 years than be tortured for 50 years and get dust in my eye.

    To me it is immediately obvious that torture is preferable. Judging by the comments, I'm in the minority.

    How can I avoid being the one your AI machine chooses to be tortured?

    I think the best counterargument I have come across to this line of reasoning turns on the fact that there might not be 3^^^3 moral beings in mindspace

    There might not be 3^^^3 moral beings in mindspace, and instantiating someone more than once might not create additional value. So there's probably something here to consider. I still would choose torture with my current model of the world, but I'm still confused about that point.

    Fun Fact: the vast majority of those 3^^^3 people would have to be duplicates of each other because that many unique people could not possibly exist.

    The answer is obvious once you do the math.  

    I think most people read the statement above as saying either "torture one person a lot" or "torture a lot of people very little". That is not what it says at all, because 3^^^3, that is 3^^7625597484987, is more like the idea of infinity than the idea of a lot.

    If you were to divide up those 3^^^3 dust particles and send them through the eyes of anything with eyes since the dawn of time, it would be no minor irritant.  You wouldn't be just blinding everything ever.  Nor is it just like sandblasting... (read more)

    I've taught my philosophy students that "obvious" is a red flag in rational discourse.

    It often functions as "I am not giving a logical or empirical argument here, and am trying to convince you that none is needed" (really, why?) and "If you disagree with me, you should maybe be concerned about being stupid or ignorant for not seeing something obvious; a disagreement with my unfounded claim needs careful reasoning and arguments on your part, so it may be better to be quiet, lest you be laughed at." It so often functions as a trick to get people to overlook an... (read more)

    I drop the number into a numbers-to-words converter and get "seven trillion six hundred twenty-five billion five hundred ninety-seven million four hundred eighty-four thousand nine hundred eighty-seven". (I don't do it by hand, because a script that somebody tested is likely to make fewer errors than me). Google says there are roughly 7 billion people on earth at the moment. Does that mean that each person gets roughly 1089 dust specks, or that everyone who's born gets one dust speck until the 7 trillion and change speck quota has been met? I ask because i... (read more)
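
For what it's worth, the number the converter produced is 3^^3 = 3^27, not 3^^^3 (which is far too large for any converter to handle); the per-person arithmetic with today's population does check out, though:

```python
# The comment's number is 3^^3 = 3**27 = 7,625,597,484,987 (3^^^3 itself
# could never be written out). Spreading that many specks over a rough
# 7-billion-person population:
specks = 3 ** 27
population = 7_000_000_000
print(specks // population)  # 1089 specks per person
```
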

    I'm fairly certain Eliezer ended with "the choice is obvious" to spark discussion, and not because it's actually obvious, but let me go ahead and justify that - this is not an obvious choice, even though there is a clear, correct answer (torture).

    There are a few very natural intuitions that we have to analyze and dispel in order to get off the dust specks.

    1.) The negative utility of a dust speck rounds down to 0. 

    If that's the case, 3^^^3*0 = 0, and the torture is worse. The issue with this is twofold. 

    First, why does it have to be torture on the... (read more)

    Gee, tough choice. We either spread the suffering around so that it’s not too intense for anyone or we scapegoat a single unlucky person into oblivion. I think you’d have to be a psychopath to torture someone just because “numbers go BRRR.”

    The answer is obvious, and it is SPECKS.
    I would not pay one cent to stop 3^^^3 individuals from getting dust specks in their eyes.

    Both answers assume this is an all-else-equal question. That is, we're comparing two kinds of pain against one another. (If we're trying to figure out what the consequences would be if the experiment happened in real life - for instance, how many people will get a dust speck in their eye while driving a car - the answer is obviously different.)

    I'm not sure what my ultimate reason is for picking SPECKS. I don't believe there are any ethical theo... (read more)

    Strongly disagree.

     

    Utilitarianism did not fall from a well of truth, nor was it derived from perfect rationality.

     

    It is an attempt by humans, fallible humans, to clarify and spell out pre-existing, grounding ethical belief, and then turn this clarification into very simple arithmetic. All this arithmetic rests on the attempt to codify the actual ethics, and then see whether we got them right. Ideally, we would end up in a scenario that reproduces our ethical intuitions, but more precisely and quickly, where you look at the result and go “yes, tha... (read more)

    Well, 3^^^3 dust specks in people's eyes imply that that many people exist, which makes for an... interesting world, and sounds like good news on its own. While 3^^^3 dust specks in the same person's eyes imply that they and the whole Earth get obliterated by a relativistic sand jet that promptly collapses into a black hole, so yeah.

    But way-too-literal interpretations aside, I would say this argument is why I don't think total sum utilitarianism is any good. I'd rather pick a version like "imagine you're born as a random sentient in this universe, would... (read more)

    First, I wanted to suggest a revision to "3^^^3 people getting dust specks in their eyes": everyone alive today and everyone who is ever born henceforth, as well as all their pets, will get the speck.  That just makes it easier to conceive.

    In any case, I would choose the speck simply on behalf of the rando who would otherwise get torture. I'd want to let everyone know that I had to choose and so we all get a remarkably minor annoyance in order to avoid "one person" (assuming no one can know who it will be) getting tortured.  This would happen only if there were a strong motivation to stop.  The best option is not presented: collect more information.

    4Ericf1mo
    Note that you have reduced the raw quantity of dust specks by "a lot" with that framing. The heat death of the universe is in "only" 10^106 years, so that would be no more than 2^(10^106) people (if we somehow double every year), compared to 3^^(3^27), which is 3^(10^(a number too big to write down))
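
The two counts above are too large to compare directly, but iterated logarithms make the gap visible. A rough sketch, comparing log10(log10(x)) for each, and using a tower of 3s only five levels high as a stand-in (3^^^3's tower is about 7.6×10^12 levels, so this badly understates it):

```python
import math

# log10(log10(x)) for each quantity:
# 2^(10^106): its log10 is 10^106 * log10(2)
loglog_doubling = math.log10(10**106 * math.log10(2))  # about 105.5
# 3^^5, a tower of 3s five high: its log10 is 3^^4 * log10(3),
# and log10(3^^4) = 3^27 * log10(3)
loglog_tower5 = (3**27) * math.log10(3) + math.log10(math.log10(3))
print(loglog_tower5 > loglog_doubling)  # True: even a 5-high tower wins
```

Since even a height-5 tower dwarfs the doubling-until-heat-death total, 3^^^3 (a tower trillions of levels high) does so beyond any comparison.
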
    [+][comment deleted]1y00
    [+][comment deleted]3mo10