Who Told You Moral Questions Would Be Easy?

In addition to (alleged) scope insensitivity and "motivated continuation," I would like to suggest that the incredibly active discussion on the torture vs. specks post is also driven in part by a bias toward, well, closure; a bias toward determinate answers; a bias toward decision procedures that are supposed to yield an answer in every case, and that can be implemented by humans in the world in which we live, under the biological and social pressures that we face.

That’s the wonderful thing about the kinds of utilitarian intuitions that tell us, deep in our brains, that we can aggregate a lot of pain and pleasure of different kinds among different people and come up with some kind of scalar representing the net sum of "utility," to be compared with some other scalar for some other pattern of events in some possible world, the two scalars then determining which world is morally better and to which world our efforts should be directed. Those intuitions always generate a rationalizable answer.
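(Schematically, and purely to illustrate the picture I mean rather than anyone’s actual proposal: assign each person i a utility u_i(w) in world w, sum to get U(w) = u_1(w) + u_2(w) + … + u_n(w), and call w1 morally better than w2 whenever U(w1) > U(w2). By construction, every pair of worlds gets an answer.)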

If we demand that our moral questions have answers of that type, comments like Eliezer’s start to look very appealing. Eliezer says that it’s irrational to "impose and rationalize comfortable moral absolutes in defiance of expected utility." But if that’s so, Eliezer owes us an argument for why moral judgments make sense in terms of expected utility. Or why they make sense in terms of any decision theoretic calculation at all. Or why they have to make sense in terms of any overall algorithmic procedure of any kind. To simply assume that decision theory applies to moral questions, that there’s something — utility, goodness, moral worth, whatever — to maximize is to beg the question right from the start. And that’s bias if anything is.

One can easily argue that making decision theoretic arguments about moral questions is a massive category error. On that view, it’s not irrational to blink away the dust speck even if you believe that no quantity of dispersed dust equals even a moment of torture, and even if there’s some action you could take instead with a 1/3^^^3 chance of saving someone from torture. (We don’t need to mess around with hacks about hyperreal numbers and the like.) Likewise, it’s not irrational to spend 3^^^3 dollars to save a single human life, or to decline to spend that money, because, yup, the sort of rationality that compares values in this way just doesn’t apply to moral questions.

More broadly, there simply might not be a decision procedure at all. I think the sort of people who are drawn to the Bentham-by-pocket-calculator reasoning that we see here are revealing a serious discomfort with the idea that there might be moral questions that are not easily resolved by some kind of algorithm. Or that might not be easily resolved, period, or resolved at all. There might be moral paradoxes. There might be irreconcilable moral conflicts. Normative truth might not follow the same laws that descriptive truth follows.

There’s a classic example in moral philosophy, thanks to Sartre. A young man in Vichy France has to choose between caring for his mother and fighting for the Resistance. Those who drink the decision theoretic kool-aid are committed to the notion that there’s some kind of calculation that’s possible in principle to determine which duty has a higher value. But it’s really hard to swallow that claim. Many people feel very strongly, when confronted by that example, that the poor young man is blamable for any decision he makes, for any decision must neglect some duty. He has had bad moral luck.

Incidentally, that example also shows that anyone who thinks moral absolutes are "comfortable" is seriously misinformed. Consider torture again. It’s hardly comfortable to stick to the morally absolute position that one can’t (in the classic hypothetical case) torture a terrorist to find out the location of the nuclear bomb in New York City. That’s a hard position. Anyone taking that position suffers, internally and externally, and suffers a lot. It’s a lot easier to fall into "expected utility" and justify the torture. So please don’t insult people who accept deontological moral theories by suggesting that they’re (we’re) hiding in a "comfortable" position.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Long ago, I believed that morality came from outside me, like a great light in the sky, as Terry Pratchett put it. I didn’t believe in God, but I believed in morality. If there was no morality, why, that whole case had utility equal to zero, by assumption, so those possibilities cancelled out of the equation – no point in betting on them.

    Then I considered, really considered for the first time, the case where I knew for absolute certain that there was no objective morality – which to me meant no morality at all, no “rational” decision. And it came to me that, even so, I would still choose to save people’s lives.

    Then I realized that was morality.

    So I’d have to ask how you choose, Paul, in the cases where there is no rational or moral answer.

  • Paul Gowder

    Eliezer: one thing I greatly respect about you is your ability to always frame your arguments, even those with which I furiously disagree, in an incredibly powerful fashion.

    So I believe that I owe you an answer, but I’m not sure I can give you one. Perhaps the most compelling thought I have on this matter (lifted pretty much entirely from Bernard Williams) is that those are the situations when some quality called character reveals itself. That the person who chooses to fight in the Resistance and the person who chooses to take care of his mother are both people with moral worth, but are very different people, and the difference between them comes down to that character. And how do people make these decisions? Well, they just do. The decisions are inside them. There’s a gap between the rationally calculable and the decisions that are actually made, and people make a leap (sort of an atheistic version of Kierkegaard’s leap to faith) from the end of the rational to the decision.

    How would I do it? Personally? I don’t know. I don’t believe I’ve personally confronted a situation like that. Were I in the position of Sartre’s young man, I’d agonize over the choice, and I like to think I’d do something, but I can’t honestly say I know which I’d choose or why.

  • http://www.satisfice.com/blog James Bach

    Thanks Paul, for the thoughtful post. I quite agree that there’s a big fat category error being committed.

    If we are going to respect mathematics, we have to respect logic, don’t we? And logical reasoning relies on premises. Premises rely on… extra-rational choices. This is so well established in modern philosophy, and the alternative ideas (such as moral realism) so thoroughly smashed to infinitesimal bits, that I’m sometimes startled when moral realism is brought into a conversation.

    I would give the following answers to Eliezer’s question about how to decide in situations where there is no moral or rational method:

    1. I may choose in accordance with the palpable intuition produced in me by two billion years of evolution. As a human, I am not some conceptual semiote. I am a physical being that is structured to supply certain responses.

    2. I may choose in accordance with a story I inhabit. Perhaps I’m feeling part of an egalitarian story today, or perhaps I feel like a robber baron. This varies based on what I’ve been up to, lately.

    3. I may choose in accordance with a prior agreement or arrangement.

    4. I may choose arbitrarily.

    5. I may choose in accordance with a will to learn. Which option leads to more learning?

    6. I may refuse to make a choice.

    7. I may begin to make a choice, and only as I’m making it realize its implications. This is called enactment.

    8. I may choose according to the rules and goals of a game I wish to win.

    I’m sure there are many others. Each of them is easily justified to my own satisfaction. There’s no general answer to the question “which is the right way or best way to choose?” but there may be a specific answer in a specific situation.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    I thought morality was in essence about which choices we do and should make, and that decision theory is about how we do and should make choices. So how can decision theory not be relevant to morality?

  • http://pdf23ds.net pdf23ds

    Robin, it seems like Paul is trying to get away from the view that morality is just about the choices we make in certain situations. Rather, it’s more of a collection of prototypical situations (say, caring for family, and serving your country) with different intrinsic moral value, but where the values of the different situations aren’t comparable. Of course, Paul may feel free to correct me.

    I don’t really understand this view I outlined. It seems that at some point you have to be able to compare the value of different actions. Otherwise every moral choice we face would be a dilemma. Should I help family or slightly-more-extended family? Should I give the homeless person five or ten dollars? These are different choices, but on the same spectrum, and directly comparable. (Or would you disagree, Paul?) So then we have to draw the line between comparable situations and incomparable ones. Where do you draw that line? I don’t think it can be done, so I reject the view I outlined.

  • josh

    “Then I realized that was morality.”

    but it’s not necessarily objective, at least across individuals.

  • Jef Allbright

    A category error is being committed to the extent that moral evaluation is seen in scalar, rather than vector terms. But a much more serious error is committed by those who take the inherent subjectivity of moral evaluation to mean that this precludes quantifiable methods of rational decision theory.

  • http://michaelkenny.blogspot.com Mike Kenny

    When your preferences are simple and the situation is simple, the answer, it seems, should probably be simple. Say you only care about money, and you have two job offers, one for $100,000 a year and another for $200,000 a year. If all other things about the jobs are equal, you’d go with the $200,000-a-year job.

    Now imagine you’re in an art gallery and you have to pick only one painting to view for the rest of your life. What painting would you choose, and why? What you choose is obviously based on taste, but what are your tastes? Do you like pictures with a lot of people in them? A lot of the color blue? Ones that are large in scale? Symbolic? Realistic? What equation could you generate that would give you a good answer here? I could see psychologists in the long run creating good models that would predict what you would choose, but for now, the choosing seems most efficiently done not by reducing a painting’s value down to one factor (the number of people in the painting? the number of square inches of canvas?) but rather by asking which one is most in line with your tastes generally.

    Look at the Sartre example of caring for your mother or helping the Resistance as a choice between two pieces of art–which piece of art is most appealing to you? The experience of helping mom, or of helping the Resistance?

    Does my account seem reasonable? (gallery example partially inspired by some of the writing in Tyler Cowen’s Discover Your Inner Economist)

  • Paul Gowder

    Lots of hard questions here. Briefly:

    Robin: Decision theory is about how we make choices which are quantifiable in some fashion (even if it’s only as ordinal preference rankings over states of affairs weighted by probabilities).
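    (To make that concrete with the textbook formula, rather than anything Robin or Eliezer wrote here: expected-utility theory ranks actions a by EU(a) = sum over states s of p(s) * u(o(a, s)), where o(a, s) is the outcome of taking action a in state s. My claim is that moral questions needn’t come packaged with any such p and u.)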

    pdf: But not all moral questions are the same. Where we’re comparing states of affairs that are described along the same dimensions, and there are no overriding duties, sure, it might make sense to use a utilitarian calculation. (Utilitarianism for the easy questions, deontology or virtue ethics for the hard ones? Why not?)

    Interestingly, note that we have problems even when we’re being asked to bring about morally significant states of affairs on the same dimension. I recommend Judith Jarvis Thomson. 1985. “The Trolley Problem.” _Yale Law Journal_ 94:1395-1415.

    Jef: But I’m not arguing that uncalculability comes from subjectivity. While my eventual choice between the Resistance and the duty to my mother will have to be subjective (that’s why it’s an expression of character), the problem itself need not have anything inherently subjective about it. Sartre’s problem is difficult because it expresses duties that most of us feel are — universally, objectively — absolute (care for your mother, fight the Nazis), and places them in conflict.

    Mike: that sounds much like the idea of character I’m trying to tap.

  • Jef Allbright

    “It seems that at some point you have to be able to compare the value of different actions. Otherwise every moral choice we face would be a dilemma.”

    Rarely, if ever, does a physical system incur the computational expense of such evaluation of expected outcomes, and then only when outcomes can be reasonably expected to be reliably specified. In a complex and evolving environment, it is generally good enough, and better (more efficient and adaptable) in the bigger picture, to act merely to minimize the difference between perceived reality and one’s internal model of a desired state. Wash, rinse, and repeat indefinitely. Wisdom is encoded in the depth and precision of the principles defining the feedback loop, not in attempting to quantify that which is computationally intractable.

  • Bob

    This has been a fascinating discussion. My instinct was specks. But I truly don’t see how anyone can stick with specks under what seems to be the most obvious interpretation of the original question. Forget specks – take the question as choosing between enormous, but finite, harm to one person and very small, but nonzero, harm to an arbitrarily large number of people. Take the limit to infinity. How can you choose finite harm over infinite harm? How can harm not be summed? How could it be scaled so the sequence converges? Does the second person losing an arm not count or only count one-half as much as the first (and the third one-quarter, etc.)? I understand wanting to seek more information (but on a hypothetical?). I understand wanting to deny the need to make a choice, to look for a loophole. I understand denying the morality of both decisions. But I don’t understand choosing harm that approaches infinity over finite harm just because the finite harm is concentrated. If I say “specks” rather than 50 years of torture, I’d have to choose specks over 49 years. And 48 years… This implies that near infinite harm spread thinly is better than one second of torture for one individual.
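    (To spell out the arithmetic of that halving scheme, purely as an illustration: if the k-th victim’s harm counts for h/2^(k-1), the total over any number of victims is at most h + h/2 + h/4 + … = 2h, so the sum does converge. But the convergence comes only from deciding in advance that additional victims count for almost nothing, which seems like the very thing that needs justifying.)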

    Again, this has been a fantastic discussion.

  • Paul Gowder

    Bob: What about denying the coherence of your single dimension of “harm,” and saying that the specks and the torture just aren’t the same thing, and aren’t reducible to the same thing (“harm”)? It’s not so much comparing apples and oranges as comparing aircraft carriers and the color green.

  • Jef Allbright

    “But I’m not arguing that uncalculability comes from subjectivity. While my eventual choice between the resistance or the duty to mother will have to be subjective (that’s why it’s an expression of character), the problem itself need not have anything inherently subjective about it.”

    Moral decision-making always entails two fundamental elements: the agent’s (1) model of a subjectively desired state, and (2) model of (increasingly) objective principles of effective interaction with ‘reality’. Each is crucial to mensuration of matters of morality. Each is differently difficult.

    As for the Trolley Problem, it doesn’t show that morality is mysterious, but only that the popular conception of morality is flawed. Humans’ evaluation of moral issues is encoded at various levels, from innate drives including disgust or pride, to cultural drives expressed by societal/religious norms, to logical reasoning about maximizing expected utility. It’s not surprising that differing representations of the problem yield differing solutions, nor that we feel so strongly about questions of what is “right.”

    Each of these models has in common that they encode, not “what is right” in any objective way, but rather evolutionary “wisdom” of “what works” in principle to further the values of an increasing context of agents over an increasing scope of consequences. We, being a result of these same evolutionary processes, therefore tend to see our branch of “what increasingly works” as isomorphic with “increasingly right.”

    The practical significance of this is that it takes us a step beyond shallow utilitarianism, expanding our intentional focus from perceived desired consequences in an inherently uncertain future, to *improving the process* that delivers increasingly desirable consequences via increasing effective awareness of elements 1 and 2 above.

  • Bob

    Paul, but the original question was very carefully posed as different degrees of harm. I share your desire to elevate torture as a different sort of bad. I just can’t see how to do it in a consistent way without changing the question. And I think that your (really “our”) desire to change the question is one of the more interesting things to come out of the discussion.

  • Nick Tarleton

    Paul: How, then, do you decide between certain specks and uncertain harm? Would you rather speck 3^^^3 people with probability 1 or torture one person with probability 1/3^^^3?

  • Jef Allbright

    “But I truly don’t see how anyone can stick with specks under what seems to be the most obvious interpretation of the original question.”

    The problem here is analogous to the “paradox” of the iterated Prisoners’ Dilemma. If the consequences can be completely specified, then the problem is one of classical rational decision-making. But in the real world, any agent is constrained by a point of view that is both (1) located and (2) limited. For this reason, our best approach to this interesting and realistic class of problems is to implement solutions based on best known principles (and wait to see the actual complex outcome).

    It’s like being faced with the problem of a huge chasm that separates one from trading partners on the other side. Does one assume that a bridge is needed, and proceed to allocate physical and intellectual resources to building the best possible bridge? Or does one focus on defining the problem in terms of (1) one’s values and (2) best known principles of promoting those values? In the latter case, the process of defining and refining the problem actually leads to defining the solution, which may turn out to be not a bridge per se but perhaps a more highly adapted system of communication and delivery of goods.

    Well, I’ve reached my self-imposed quota for the day, despite this being an interesting topic and one that is crucial to optimizing humanity’s path toward an increasingly complex future.

  • Mike

    Nick:
    Paul: How, then, do you decide between certain specks and uncertain harm? Would you rather speck 3^^^3 people with probability 1 or torture one person with probability 1/3^^^3?
    As I mentioned in the other thread, that question and Eliezer’s question are profoundly different. In Eliezer’s example, you are torturing someone with 100% certainty, at the cost of a trivial harm. (My answer to Eliezer’s question is essentially that (trivial harm)*(3^^^3 people) = trivial harm.) In your example, you are choosing a trivial harm against a trivial (basically = 0) probability that someone will get tortured.

    3^^^3 people simply cannot be equated with a 1/3^^^3 probability, which, if intelligible at all, is equal to 0 with greater certainty by far than anything else we are “most certain” about in the real world.

  • Paul Gowder

    Bob: Ok, I’ll go along with that. Let’s abstract from torture and specks and just say “ouch.” Poke 5000 people with one thumbtack each, or poke 1 person 4999 times. I think it’s still reasonable to believe that tiny dispersed ouch just doesn’t add up to one big ouch. It just doesn’t aggregate like that. At some point there’s a “magic poke” where you’re not just having ouchies, you’re being brutalized. And no, I can’t say where the number is, but if morality isn’t decision theoretic, must I do so?

    Nick: To be consistent, I’d have to go with the specking in your case too. But is consistency required? (My intuition about that example is “who cares,” because the torture is so uncertain that it can be discounted to zero as long as we don’t repeat it.)

  • Nick Tarleton

    Poke 5000 people with one thumbtack each, or poke 1 person 4999 times. I think it’s still reasonable to believe that tiny dispersed ouch just doesn’t add up to one big ouch.

    I agree, but not because they’re somehow incommensurable; rather, suffering is linear with single pokes across multiple people but greater than linear when the same person is poked more than once, because of emotional effects on top of the raw pain from the pokes. I would indeed say there is some N between 1 and 5000 where N pokes to 1 person goes from better to worse than 1 poke to 5000 people, but I don’t see it as the “magic poke”; it’s just the point where a certain increasing nonlinear function crosses a certain constant value. In practice, we don’t and can’t know the function or the value, so we generally have to decide on principle, and this principle may include discrete “ouchie” and “brutalization” categories; but that doesn’t mean they’re not both “just” suffering.
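    (With made-up numbers, to show what I mean by a crossover: say one poke to one person costs 1 unit of suffering, so 5000 dispersed pokes cost 5000 units, while N pokes to a single person cost N^1.5 units, the exponent standing in for the emotional pile-on. The crossover is then where N^1.5 = 5000, around N = 292. The 1.5 is pure illustration; the point is only that any increasing superlinear function crosses the constant somewhere.)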

  • Nick Tarleton

    Mike, I wasn’t claiming equivalence between my question and Eliezer’s, only trying to make the point that it’s harder to judge under uncertainty if you have incommensurable values.

    If 3^^^3 people actually existed, then a 1/3^^^3 probability would not be trivial in absolute terms – for instance, torturing each existing person with probability 1/3^^^3 would be very bad, while it’s ridiculously trivial in our world.
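    (The arithmetic: with 3^^^3 people each independently facing a 1/3^^^3 chance, the expected number of people tortured is 3^^^3 * (1/3^^^3) = 1, so the “trivial” probability stops being trivial exactly when the population is large enough to match it.)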

  • rcriii

    Bob, I have to chime in here and say that the original question was _not_ carefully posed:

    – Very early in the thread someone noted that 3^^^3 is much greater than the likely number of humans ever. So we are asked to decide between something that has happened in the past and an impossibility.
    – Eliezer did not put any kind of value on the harm caused by the ‘speck’. Is it 50/3^^^3 of 1 year of torture, more or less? How can we do any kind of ‘calculus’ if we don’t know?
    – How do we account for the fact that a speck, once washed or brushed out of the eye, is quickly forgotten, while torture leaves physical and psychological scars that can last a lifetime? Or are these memorable specks?

  • Bob

    Paul/Nick,

    I’m still struggling with how tiny dispersed ouches don’t *eventually* add up to one big ouch. The only way is if they represent zero harm. Personally, this was my first line of defense for my specks intuition but I decided that the question precluded the harm from actually being zero. In reality, I believe that there is a threshold of suffering below which there is no harm. Consider, perhaps, the level of harm you would be willing to suffer for a 1/N probability (N very large but not too large) that you would save a stranger’s life. If you willingly accept harm that has no benefit to you and almost no expected benefit to anyone, can we really call it harm? The specks seem like they would clearly meet this test.

    Of course, to rcriii’s post, I’m not trying to make this a realistic question. And assuming nonzero harm, it has to aggregate somehow. Nick raises an interesting idea but what would make us believe that the disutility of harm is nonlinear enough to avoid the problem as the number of tiny harms goes to infinity?

  • http://www.existenceiswonderful.com AnneC

    Paul, thanks for this.

    My first thought was “specks” and after further internal deliberation and reading through the comments, I find myself at the same conclusion. The attempt to compare “specks” with “torture” in this manner is incoherent. It’s probably reasonable to assume that practically everyone gets a speck of dust in one or both of their eyes on a daily basis. Dust exists pretty much everywhere humans exist, and the probability of spending your day anywhere outside an industrial cleanroom *without* having dust contact your ocular region is likely very low.

    So it’s not as if our present position is one of zero knowledge regarding the effect of dust specks in our eyes — this is something that happens in the real world, every day, and honestly, it’s not something that garners a lot of complaint. Torture, however, is superlatively awful *by design*. And I personally would rather live in a world where dust specks were commonplace, but nobody was being tortured, than the reverse.

    How a person answers this question probably depends somewhat on how literally they take it. I immediately considered a situation consisting of a comparison between actual dust specks and actual torture, and my analysis was informed by that literalness. However, it seems that some respondents interpreted the question in a more purely abstract sense — e.g., rather than applying their ethical sense to the real world, they approached the problem as one of getting the “right” answer from a consciously algorithmic standpoint.

    I am not getting why some seem tempted to invoke the viewpoint of an entity capable of somehow “feeling” the cumulative effect of tiny amounts of possible badness. If no such entity exists to experience this aggregate suffering, then it doesn’t make sense to suggest that imagining an aggregate suffering made up of a zillion teensy annoyances is the best way to reason out one’s moral decisions.

    Also, the fact that different people are responding to this dilemma in different ways (and with different reasoning paths) hopefully demonstrates that there’s more than one way to consider such questions, and that no, there’s not likely to be a One True Algorithm that somehow allows a person to make “proper” moral decisions quickly and tidily in every case. A memorized “abstraction machine” cannot be used as a substitute for actual thinking, of the sort that engages directly with the relevant, real, data at hand in a given situation. Sure, abstraction machines can help a person structure their thoughts in some cases, but I firmly believe that people shouldn’t let their devotion to keeping their “isms” harmonious override their devotion to the well-being of individuals.

    With regard to the original question again, I think the point might have better been served by simply asking whether torturing one person or torturing a whole lot of people was worse. (And I’m sure answers to this would vary as well — personally I see torturing anyone as just as bad as torturing everyone, but that’s beside the point right now).

    The “dust specks” example was so inane as to be functionally meaningless in discussion — you might as well have said, “What would be worse, torturing one person for 50 years, or giving 3^^^3 people a stuffed toy unicorn?” Given that stuffed toy unicorns are bigger than dust specks, and potentially more dangerous (e.g., people could trip over them!), it would seem that there’s a greater Existential Unicorn Risk than an Existential Dust Speck Risk. But I’d still choose “Unicorns for All” world over “Torture One Guy” world any day.

  • Nick Tarleton

    And I’m sure answers to this would vary as well — personally I see torturing anyone as just as bad as torturing everyone, but that’s beside the point right now.

    If you save one person from torture, but there are other people being tortured that you can’t do anything about, have you not done something good? I don’t think this is beside the point – part of the original dispute is to what extent suffering is additive.

  • Bob

    AnneC,

    RE Nick’s comment, think about the implications of your claim. It’s actually worse – if everyone is being tortured and you can save all of them but a single person, that is not an improvement?

  • http://www.existenceiswonderful.com AnneC

    Bob said: RE Nick’s comment, think about the implications of your claim. It’s actually worse – if everyone is being tortured and you can save all of them but a single person, that is not an improvement?

    It’s not an improvement for the person who is still being tortured.

    If you’ve managed to save some people from being tortured, you have indeed done a good thing for those specific people.

    You haven’t “done a good thing” in the abstract, utilitarian sense of “maximizing the greater good” from some imaginary omniscient perspective, because that perspective doesn’t exist.

    I do think we should try to save as many people from torture as possible, and I realize that in the practical sense, it might not be possible to save everyone all the time. But that doesn’t make the “one guy is being tortured” situation morally acceptable, or more morally acceptable than the “everyone is being tortured” situation.

    Both situations are equally unacceptable, because in each case, there is torture (which, again, is superlatively awful by design) being experienced from an individual perspective.

    Perhaps if you’re the tortured individual, and you’re aware that everyone else has been saved, you might feel better from an empathic standpoint. But it seems doubtful that torturers would want to tell their victim anything that might make him feel better.

    I guess what I’m trying to say is, acknowledging the practical limitations of one’s rescue tactics doesn’t equate to having achieved moral acceptability or superiority. I also think that these hypothetical zero-sum-game dilemmas are unrealistic. In real life, the choice is not usually really between “saving 1 person vs. saving 10 people” — and I think that people who train themselves to think in “zero-sum terms” risk stifling their creative faculties for dealing with difficult situations.

  • ChrisA

    I would put Paul’s question another way. If, as many people (including myself) believe, what we call moral systems are actually rules of thumb generated by genetic machinery that evolved to ensure group survival in the early days of humanity, then we should not expect to be able to do any moral calculus. The moral rules of thumb we have were not designed to be consistent or “right” in any fundamental sense; they are there only to ensure that the group worked together, and rules that didn’t work didn’t survive. As a result, in today’s modern world, where we are faced with many different situations that our ancestors were not faced with, it is to be expected that we will encounter many moral paradoxes. If genetic rules of thumb are really where morality originates, it will be especially easy (trivial) to generate hypothetical examples of moral paradoxes, or difficult moral calculations, as we have seen on this blog.

    On the question of whether decision theory is useful in moral calculations, I would say there is an analogy with economics, where the utility question is simplified to be optimisation of a particular chosen variable, usually money. Once you have decided to optimise that variable, then a calculation becomes possible. But economics has no say in whether one variable should be optimised over another one.


  • kaz

    There’s a central question that I think it is useful to address directly to some extent for this discussion: what is morality? What does it mean for a choice to be “immoral”? This is a question of definition really, so the answer is somewhat subjective.

    I think “moral” and “immoral” are complex terms in themselves: almost(?) anyone could increase their net benefit to others, if they made personal sacrifices to do so, as most people do not have as their foremost goal to maximize their benefit to others at all times. Is it “immoral” not to hold altruism as the goal of highest priority in all cases? It is merely less moral than the highest possible degree of morality. I think it is usually more useful to speak of morality of choices relative to their alternatives, than by using vaguely-defined thresholds for a “moral” choice versus an “immoral” choice – language which I think can lead to a lot of confusion.

    Morality, to me, refers to how one’s actions affect one’s expectation for another’s expected utility. That is, in a simple world with only Alice and Bob, it is immoral for Alice to act in a way that she would expect to reduce (Bob’s expected utility, according to Bob). The morality of Alice’s choice X can be described thus:

    Ma: Morality for Alice
    X: a certain option
    Pa: Alice’s perception
    Ub: Bob’s expected utility
    Uba: Bob’s expected utility, as Alice would predict it

    Ma(X) = Uba(X) = Pa(Ub(X))

    Though this quantitative view is a simple model that provides a way of analysing decisions from a moral standpoint, it still does not make answering moral questions easy: the difficult part is counting not just Alice’s expectation of the effect of a decision on Bob, but its effect on everyone affected (including Alice) – a decision’s overall morality is clearly determined by its effects on everyone affected.

    It would seem that summing one’s expected effect on all people involved is the proper way to determine overall morality, as such a system has the desirable property that one decision that affects ten bystanders in a certain way is morally equivalent to ten smaller decisions, each affecting one of those people in the same way.

    In order to perform such a summation, it is necessary to normalize other people’s expected utilities – or equivalently, to measure them all in absolute, interchangeable units of effect. While this is impossible to do perfectly, I think it is quite possible to approximate.
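    As a toy sketch of that summation (the utilities and normalization weights below are made up purely for illustration, not a claim about how real utilities could be measured), in Python:

        # Toy sketch of the "sum of expected effects on everyone affected" idea above.
        # The utilities and weights are illustrative assumptions, not measurements.

        def morality(effects, weights=None):
            """Sum the decision-maker's predicted effect on each affected person.

            effects: dict mapping person -> predicted change in that person's
                     expected utility, as the decision-maker estimates it
                     (the Uba-style terms above).
            weights: optional per-person normalization factors, so one "unit"
                     of effect is roughly comparable across people; defaults to 1.
            """
            if weights is None:
                weights = {person: 1.0 for person in effects}
            return sum(weights[person] * delta for person, delta in effects.items())

        # Example: option X helps ten bystanders a little and hurts Alice a lot;
        # option Y leaves everyone alone.
        option_x = {"alice": -5.0, **{f"bystander_{i}": 1.0 for i in range(10)}}
        option_y = {"alice": 0.0, **{f"bystander_{i}": 0.0 for i in range(10)}}

        print(morality(option_x))  # 5.0: dispersed benefit outweighs concentrated harm
        print(morality(option_y))  # 0.0: doing nothing

    Because the sum is linear, the equivalence property above falls out automatically: splitting a decision that affects ten people into ten separate decisions, one per person, yields the same total.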

    Note that this morality function is not affected by benefit or harm to the decision-maker. This may seem like a flaw; is it not more moral for Alice to harm Bob slightly if she must do so to receive some great benefit, than it is for Alice to do the same harm for very little benefit to herself? No, I think it is not; this kind of decision generally exposes the value Alice places on morality: in the former situation, Alice may make the decision even if she places a fairly high price on morality, while the latter situation suggests that Alice is willing to take immoral action for a lower price – the morality of the choice is not affected by such context, though what it may reveal about Alice’s morality is (and the two are easily confused).

    One possible objection to this system would be the idea that this would mean that it is morally neutral to help Hitler achieve his goals at the expense of hindering Gandhi, each to such an extent that their expected utilities are affected by the same (but opposite) amount. Not so! This quantitative approach requires taking into account all foreseeable effects, including indirect effects… which is part of the reason moral questions aren’t easy.

    So I agree that moral questions aren’t easy, but I don’t think it’s because it’s difficult to reason about morality abstractly; in my opinion the difficulty is mostly in forecasting the effects of one’s decisions, which is a problem squarely in the realm of decision theory – and thus the methods of doing so are well studied.