Overcoming Bias Commenter:

There's a central question that I think is useful to address directly for this discussion: what is morality? What does it mean for a choice to be "immoral"? This is really a question of definition, so the answer is somewhat subjective.

I think "moral" and "immoral" are complex terms in themselves: almost(?) anyone could increase their net benefit to others by making personal sacrifices, as most people do not hold maximizing their benefit to others as their foremost goal at all times. Is it "immoral" not to hold altruism as the highest-priority goal in all cases? No - it is merely less moral than the highest possible degree of morality. I think it is usually more useful to speak of the morality of choices relative to their alternatives than to use vaguely defined thresholds separating a "moral" choice from an "immoral" one - language that I think can lead to a lot of confusion.

Morality, to me, refers to how one's actions affect one's expectation for another's expected utility. That is, in a simple world with only Alice and Bob, it is immoral for Alice to act in a way that she would expect to reduce (Bob's expected utility, according to Bob). The morality of Alice's choice X can be described thus:

Ma: Morality for Alice
X: a certain option
Pa: Alice's perception
Ub: Bob's expected utility
Uba: Bob's expected utility, as Alice would predict it

Ma(X) = Uba(X) = Pa(Ub(X))
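A minimal sketch of this two-person model, with function names and example numbers of my own invention (the utility figures are purely illustrative, not part of the original argument):

```python
def morality_for_alice(option, alice_predicts_bob_utility):
    """Ma(X) = Pa(Ub(X)): the morality of Alice's choice X is
    Alice's prediction of Bob's expected utility under X."""
    return alice_predicts_bob_utility(option)

# Hypothetical predictions: Alice expects "share" to raise Bob's
# expected utility and "take" to lower it.
predictions = {"share": +3.0, "take": -2.0}

ma_share = morality_for_alice("share", predictions.get)
ma_take = morality_for_alice("take", predictions.get)
print(ma_share, ma_take)  # 3.0 -2.0
```

The point the sketch makes concrete is that the moral score depends only on Alice's *prediction* of the effect on Bob, not on the actual outcome or on Bob's own forecast.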

Though this quantitative view is a simple model that provides a way of analysing decisions from a moral standpoint, it still does not make answering moral questions easy: the difficult part is accounting for not just Alice's expectation of a decision's effect on Bob, but its effect on everyone affected (including Alice) - a decision's overall morality is clearly determined by its effects on everyone affected.

It would seem that summing one's expected effect on all people involved is the proper way to determine overall morality, as such a system has the desirable property that a decision that affects ten bystanders in a certain way is morally equivalent to ten smaller decisions that each affect one of those people individually, in the same way.

In order to perform such a summation, it is necessary to normalize other people's expected utilities - or equivalently, to measure them all in absolute, interchangeable units of effect. While this is impossible to do perfectly, I think it is quite possible to approximate.
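Under those assumptions, the summation might be sketched as follows. The per-person normalization weights here are hypothetical stand-ins for the "absolute, interchangeable units of effect" described above; nothing in the original specifies how to obtain them:

```python
def overall_morality(option, predicted_effects, weights):
    """Sum the decision-maker's predicted utility change for each
    affected person, after normalizing to a common scale.
    predicted_effects[person](option) plays the role of
    Pa(U_person(X)); weights[person] converts that person's utility
    units into interchangeable ones (a hypothetical normalization)."""
    return sum(weights[p] * predicted_effects[p](option)
               for p in predicted_effects)

# Hypothetical example: one decision that harms each of ten
# bystanders by one (normalized) unit of expected utility.
effects = {f"bystander{i}": (lambda opt: -1.0) for i in range(10)}
weights = {p: 1.0 for p in effects}
print(overall_morality("X", effects, weights))  # -10.0
```

With equal weights, this reproduces the equivalence claimed earlier: one decision costing ten people a unit each scores the same as ten separate decisions costing one person a unit each.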

Note that this morality function is not affected by benefit or harm to the decision-maker. This may seem like a flaw; is it not more moral for Alice to harm Bob slightly if she must do so to receive some great benefit than it is for Alice to do the same harm for very little benefit to herself? No, I think it is not: this kind of decision generally exposes the value Alice places on morality. In the former situation, Alice may make the decision even if she places a fairly high price on morality, while the latter situation suggests that Alice is willing to take immoral action for a lower price. The morality of the choice is not affected by such context, though what it reveals about Alice's morality is (and the two are easily confused).

One possible objection to this system would be the idea that this would mean that it is morally neutral to help Hitler achieve his goals at the expense of hindering Gandhi, each to such an extent that their expected utilities are affected by the same (but opposite) amount. Not so! This quantitative approach requires taking into account all foreseeable effects, including indirect effects... which is part of the reason moral questions aren't easy.

So I agree that moral questions aren't easy, but I don't think it's because it's difficult to reason about morality abstractly; in my opinion the difficulty is mostly in forecasting the effects of one's decisions, which is a problem squarely in the realm of decision theory - and thus the methods of doing so are well studied.

Overcoming Bias Commenter:

I would put Paul's question another way. If, as many people (including myself) believe, what we call moral systems are actually rules of thumb generated by genetic machinery that evolved to ensure group survival in the early days of humanity, then we should not expect to be able to do any moral calculus. The moral rules of thumb we have were not designed to be consistent or "right" in any fundamental sense; they exist only to ensure that the group worked together, and rules that didn't work didn't survive. As a result, in today's modern world, where we face many situations our ancestors did not, it is to be expected that we will encounter many moral paradoxes. If genetic rules of thumb are really where morality originates, it will be especially easy (trivial) to generate hypothetical examples of moral paradoxes, or difficult moral calculations, as we have seen on this blog.

On the question of whether decision theory is useful in moral calculations, I would say there is an analogy with economics, where the utility question is simplified to the optimisation of a particular chosen variable, usually money. Once you have decided to optimise that variable, a calculation becomes possible. But economics has no say in whether one variable should be optimised over another.
