Minimal Morality

(Inspired by my conversation with Will Wilkinson.)

In a typical moral philosophy paper, an author proposes a principle to summarize his specific intuitions about some relatively narrow range of situations.  For example, he might propose a principle to account for his intuitions about variations on a scenario wherein passersby learn one or more folks are drowning in a lake.  This practice makes sense if such intuitions are very reliable as clues about moral truth, but much less sense if they are very unreliable.

In the ordinary practice of fitting a curve to a set of data points, the more noise one expects in the data, the simpler a curve one fits to that data.  Similarly, when fitting moral principles to the data of our moral intuitions, the more noise we expect in those intuitions, the simpler a set of principles we should use to fit those intuitions.  (This paper elaborates.)

The fact that our moral intuitions depend greatly on how situations are framed, differ greatly across individuals within a culture, and vary greatly across cultures, suggests lots of noise in our moral intuitions.  The fact that moral philosophers don’t much trust the intuitions of non-moral-philosophers shows they agree error rates are high.  So I wonder: what moral beliefs should we hold in the limit of expecting very large errors in our moral intuitions?

It seems to me that in this situation we should rely most on the simplest, most consistent pattern we can find in our case-specific moral intuitions.  And it seems to me that this simplest pattern is just our default rule, i.e., what we think folks should do in the usual case where no other special considerations apply.  Which is simply: usually it is fine to do what you want, to get what you want, [added: if no one else cares.]

If you dropped your pencil and want to get it back, well then do reach down and pick it up.  If you have been eating your meal steadily and are feeling a little full, then do slow your bites down a bit, if that seems more agreeable.  If you are reading a magazine and the current article starts to bore you, then do skip to the next article if you guess that would bore you less.  If you have an itch and no one will know or care if you scratch, well then scratch.

Such examples can be multiplied by the millions, all fitting well with the simple pattern: [added: all else equal,] it is usually good for people to do things to get what they want.  So this seems to me the natural limit of minimal morality: trust this basic pattern only, and not any subtler corrections.  This basically picks a goodness measure close to preference utilitarianism, which is pretty close to the economist’s usual efficiency criterion.

As we back off from this minimal morality, and start to consider trusting more details about our moral intuitions, because of a lower estimate of our moral intuition error rate, what more would we add?  We might consider incorporating basic rules like “don’t kill” or “don’t lie,” but a funny pattern emerges with these.  We do not think we should apply these rules in many situations, usually situations where following these rules would prevent many people from getting other things they want.

And if asked why these are good rules, people usually explain how following them will tend to get people the other sorts of things they want.   For example, they’ll note that since the gains of liars are usually less than the costs to those who believe their lies, on average we are better off without most lies.

Yes, people do clearly often look disapprovingly on other people doing things to get what they want, and they often attribute this disapproval to non-default moral intuitions, i.e., intuitions that go beyond just wanting to get folks what they want. But we can just look at this as a situation of onlookers wanting different behavior from the disapproved folks.  And so we can want to discourage such disapproved behavior just in order to get these onlookers what they want.

This all suggests that the minimal morality pattern, of just getting people what they want, plausibly fits a large fraction of the recommended actions of our non-default moral intuitions.  Which isn’t to say that it accounts for each exact moral intuition in every particular situation; clearly it does not.  But this does suggest that as we turn down the parameter which is the estimated error rate in our moral intuitions, we have to go quite a ways before our best fit moral beliefs will be forced to deviate much from the simple minimal morality of just getting people what they want.

Since it seems to me that moral intuition error rates are pretty high, this is good enough for me; I’ll just take the efficiency criterion and run with it.  I’m not saying I’m sure that true morality exactly agrees with this; I’m just saying I don’t trust the available data enough to estimate anything much different from this simplest, most consistent pattern in our moral intuitions.

Added: To be more precise, for most situations where someone makes a choice that no one else cares about, the usual moral intuition is that the better outcomes are the ones that person wants more.   The simple pattern I see in this is that outcome goodness is increasing in how much each person wants that outcome.  Economic efficiency then follows by the usual arguments of Pareto improvements.

See also my clarifying post from the next day.

  • Grant

    I’m not saying I’m sure that true morality exactly agrees with this; I’m just saying I don’t trust the available data enough to estimate anything much different from this simplest most consistent pattern in our moral intuitions.

    It seems odd to be so skeptical about moral intuitions, while being so confident about the estimates of people’s wants. The moral intuition “killing outside of self-defense is bad” seems a lot more likely to be accurate to me than interpersonal utility comparisons. If our moral intuitions have a high error rate, what tools do we have to estimate and compare people’s wants? Maybe I’m already too much of a consequentialist to understand, but it seems like we are drawing from the same toolkit when we try to estimate utility and estimate the effects of moral rules.

    Are you sure you aren’t downplaying the contributions of moral philosophers (which I’m gathering are somewhat at odds with economists) in favor of your own in-group (which seem to count on the ability to compare interpersonal utility)?

  • Grant, to estimate folks’ wants, we can look at all the things they do. That gives us a huge amount of data to work with. In contrast, moral intuitions are only available in special circumstances. Moral philosophers don’t trust ordinary people’s moral intuitions much; they say you have to be trained like they have been in order for your intuitions to count for much in their analysis. (I just added a sentence about this in the second paragraph.)

  • Peter Twieg

    So you’re basically proposing using the moral equivalent of an f-test to see if additional explanatory variables of our intuitions should be added to our model?

    I think this is a useful metaphor (if it’s accurate), but I wouldn’t be surprised if you found people who argued that embracing the entire set of standard liberal values did offer a lot of marginal explanatory power without bringing in too much conflict (is there a “right” level of how damning these conflicts of fundamental values should be?).  I’d rather see preference utilitarianism (or other forms of consequentialism) defended on the grounds that oftentimes our moral intuitions are ill-considered or influenced by social/political pressures in indefensible ways.

  • Bob Unwin

    “This all suggests that the minimal morality pattern, of just getting people what they want, plausibly fits a large fraction of the recommended actions of our non-default moral intuitions.”

    There are famous cases in which preference utilitarianism gives results which people find abhorrent (cutting up one person to distribute his organs, the Repugnant Conclusion, etc.). There are also cases where it seems to place implausible demands on individuals: e.g. to give up personal projects (art, academia, family) and devote all resources to helping those who can be helped most efficiently.

    People have thought about this range of cases a lot and so (given your framework) one might take them to show that something more than preference utilitarianism is needed. Instead of adding to preference utilitarianism one might also try something like rule utilitarianism (see the Stanford encyclopedia article).

  • Telnar

    I think that most of the evidence that you’re using is of very low value in proving your conclusion.

    Let’s say that a simple moral algorithm starts with maximizing your personal utility as if you were the only person on the planet and then adds an interaction term for the effects of your actions on others. For now, it’s not important whether that interaction term is driven by possible future benefits or harms to be received from others (whether directly or by acquiring status) or whether it’s based on concern for the welfare of others.

    In your examples, you start from situations where that interaction term is known to be zero since no one else cares about your actions. You then use that to infer that a utilitarian interaction term is better because the first term is utilitarian.

    I don’t see any evidence that the best way for an agent to perceive and act on his own desires is at all similar to the best way to perceive and act on the much less visible desires of others.

  • Peter, yes, something like an f-test. The more often our intuitions are indefensibly biased, the less reasonably one can embrace a whole complex package of them, especially when others disagree strongly.

    Bob, some people find those results abhorrent; I and others do not. The level of the demand seems to me pretty independent of what outcomes might be demanded.

    Telnar, the efficiency version of preference utilitarianism has no interaction term. It is just give people what they want, full stop.

  • Douglas Knight

    Taking a crude outside view, matching only a simple pattern, I must ask: why do you dismiss the highly specific moral philosophy papers when you usually endorse the academic practice of many small innovations?

  • Norman Maynard

    If the goal is to identify true morality as closely as possible, fitting a very simple model to the data of personal moral intuitions only works if there is no systematic bias in the estimates. I’m not at all convinced this is the case, and several major moral traditions assert that human desires will in fact exhibit a bias away from true morality.

    But given that some people expect a bias and others do not, what is the justification for taking unbiasedness of moral intuition as the null as opposed to ‘some bias exists,’ which would seem to be the less restricted of the two? And what is the equivalent of the Hausman test in this case?

  • Telnar

    Robin, the distinction I’m making is that our intuition in most everyday situations doesn’t say that we want to maximize the sum of everyone’s preferences. It says that we want to maximize our personal preferences. Generalizing from that individual view to aggregated preference utilitarianism, which calls for maximizing the sum of individual preferences, is what creates that interaction term. From an individual perspective, it’s equal to the change in others’ utility caused by that individual’s actions. There is nothing in our intuition which directly supports that term.

  • Kevin

    Robin, it still seems to me that unless you already have a moral theory you won’t be able to identify an error level needed to pick a principle that fits it. If you’re a particularist, your error level is going to be dramatically lower than if you’re a methodist in moral theory. And utilitarians tend to be dramatically more revisionary than Kantians or Aristotelians – the motivations for the theories contribute to the expected error level.

    So I still don’t think you can pick a principle by selecting an error level alone because the moral theory you have will be the main guide towards determining the error level. And again, evolution doesn’t tell us anything about our error levels unless we already have an expectation of error set by a prior moral theory of at least some sort.

  • 1. Not sure I see the point of fitting in the limit of huge noise. The point of fitting data to a curve is to predict and extrapolate; the larger the noise, the lousier the extrapolation; in the limit of very large noise any model, simple or otherwise, has very low predictive power.

    2. Your argument seems to assume that universal features of morality are more important than particular features. If one thinks of ethics as more like aesthetics than like science, this doesn’t work. All Victorian novels were printed on paper; this is, however, the least interesting thing about them.

    3. It seems at least arguable that moral systems are “good” or “bad” depending on the extent to which they yield a rich and integrated picture of life, and that the correct objects to look at are the links and interactions between precepts, rather than the precepts themselves.

    4. Do you have any data to indicate that _the_ simplest and most consistent pattern is what you identify, or is that entirely speculative?

  • Hi Robin, you seem to make two non-sequiturs here:

    (1) “usually it is fine to do what you want… [i.e.] it is usually good for people to do things to get what they want.”

    Note that something could be permissible (“fine”) without it being good. One test is to ask whether the contrary option would have been “not fine”. But that may be too strict. You may simply want to start from the premise that it’s better for people to get what they want, no matter the fact that it’s also “fine” for them to refrain from doing so.

    (2) “This basically picks something close to preference utilitarianism

    Careful. We started off with cases of agent-relative norms: S should do whatever S herself wants. How, exactly, do you propose to move from this to the agent-neutral norm that one ought to promote everyone’s preferences? Certainly some argument is required; for on the face of it, the logic of your argument (at least as stated) should lead you to egoism, not utilitarianism.


  • Richard, I added to the post, hoping to clarify.

    Norman, bias is error; unreliability is from the expectation that there is error.

    Kevin, you can’t reduce your intuition error just by endorsing some grand viewpoint.

    Sarang; if you know another comparably simple pattern, do tell.

  • mjgeddes

    Your attempt to justify ‘tuning out’ everything other than efficiency is merely a poor trick to try to get the conclusion you want. No doubt you are so enamored of this (very Yudkowskyian) viewpoint that the whole basis of morality is giving people what they want, because it fits your economist’s perspective and ties in neatly with Bayes and decision theory.

    But let’s look more closely at both (a) economic efficiency and (b) Bayes.

    (a) Utilitarianism is limited because it is only looking at functional (external) behavior. In terms of economic efficiency, there’s no difference between a non-sentient robot performing a service and (for example) a conscious human performing the same service; in purely economic terms the value of the service is the same. This should indicate the limitations of such a viewpoint.

    (b) If morality and intelligence are different, and Bayes deals only with intelligence, what on Earth makes you think it can deal with morality as well? Bayes itself can only deal with external decisions (decisions about behavioral courses of action). Not decisions about internal thoughts.

    As you yourself point out, moral reasoning seems to be much more case-based, and what type of reasoning is perfectly suited to this? Why…analogy formation of course!

    The limitations of Bayes are quickly exposed in moral reasoning… general moral conclusions need to specify all relevant features of a situation in every possible case. But if you look at how humans actually deal with morality, it’s all story telling (narrative), metaphor and analogy formation (which are all tied to specific contexts), because only analogy formation can handle the case-based reasoning required.

    I maintain that aesthetic sensibilities are the true basis of morality/values, not giving people what they want. Giving people what they want is merely a special case of aesthetic sensibilities. Aesthetic sensibilities are closely tied with analogy formation, since (as I mentioned) humans use story telling/narrative (metaphor/analogy) extensively in moral reasoning. You yourself have agreed that stories are used to indicate who to blame/praise.

  • I find mjgeddes’ self-aggrandizing hand-waving on analogy trumping Bayes to be annoying, but I think he’s actually correct, descriptively speaking, about how people deal with morality & aesthetics. As I mentioned in your prior post, I’m a full-blown moral skeptic and I’ve doubted the existence of aesthetic truth for even longer. I distrust analogies as evidence and the fact that nobody has any reliable evidence in those fields is just another indication to me that no evidence is to be had.

    I’d also like to repeat my question from before: Assuming that there is such a thing as moral error and intuitions that give evidence about “true morality”, it doesn’t necessarily seem such a good idea to rely exclusively on one. Analogize our differing intuitions within our heads to different individuals: more precisely, experts as depicted by Tetlock. These experts are unreliable but the best we have. We think all of them are prone to error and overconfident in themselves. Wouldn’t trying to pick “the best” expert and listening exclusively to him/her be a mistake? How can we trust our own ability to determine which expert is best? Shouldn’t “the wisdom of crowds” help with the random errors associated with only listening to a single expert? If I recall correctly, phone a friend gives worse results than asking the audience in Who Wants to Be a Millionaire.

  • Unnamed

    Robin, you see a correlation between how strongly a person wants an outcome and the intuitive goodness of that outcome. In order for this to be evidence for a moral rule similar to preference utilitarianism, it seems like this relationship must have the causal direction: an outcome is better because a person wants it more. But the opposite causal direction also seems plausible: a person wants an outcome more because it is a better outcome. In other words, you say that the pattern is “it is usually good for people to do things to get what they want,” but the pattern could actually be “people usually want things that it is good to get.” For example, scratched itches are better than unscratched itches, so an itchy person wants to scratch.

  • TGGP, a moral skeptic can see moral talk as code for one common component of what people want. I’m not looking at one intuition – I’m looking at bazillions of case-specific intuitions and inferring one simple pattern. In the limit of high noise, curve fitting is not done best by collecting a large “crowd” of patterns you might think you see if you squint your eyes and mind in various ways and averaging them together.

    mjgeddes, utility is an internal state which we infer from external behavior, because that’s usually all we have to go on.

    Unnamed, I don’t see the relevance of a direction of causation.


  • Norman Maynard

    “Norman, bias is error; unreliability is from the expectation that there is error.”

    Bias is error, but not ‘noise’ in the sense of white noise, regardless of how large the standard deviation is. Dealing with unspecified movements in the standard deviation of errors and dealing with bias in the estimators are fundamentally different problems.

    It seems that your argument is equivalent to an empirical economic study which says “We are aware that our independent variables will be correlated with the error term, generating biased and inconsistent OLS estimates. However, since we are unsure of the source or form of this correlation and thus bias, we choose to use OLS estimates anyway.” If you were refereeing such a paper for a distinguished journal, would you really give the authors a pass?

  • Norman, “error” is not by definition “white”, i.e., independent. With correlated errors one should be all the more shy about assuming that patterns in noisy data correspond to real patterns behind the data.

  • mjgeddes

    >mjgeddes, utility is an internal state which we infer from external behavior, because that’s usually all we have to go on

    Economists often sound alarmingly behaviorist to me. Personal aesthetic sensibilities are not necessarily reducible to quantifiable ‘goods and services’.

    Trying to extract simple generalizations may simply be the wrong approach – the fact that moral intuitions seem sensitive to framing of specific situations may not indicate ‘noise’ as such, but rather be inherent in true morality itself, if in fact morality is genuinely highly sensitive to the specific context of each given situation (i.e. complex and case-based, where no simple generalizations are possible).

    >I find mjgeddes’ self-aggrandizing hand-waving on analogy trumping Bayes to be annoying

    But analogy may well trump Bayes. Probability calculations rely on implicit universal generalizations, but a universal generalization intended to avoid counterexamples must specify the exclusion of all such possible cases, not just the relevant features in the actual case. This may not be possible for moral reasoning, which seems highly sensitive to the specific context of each case (see above). Thus analogical arguments can’t be reduced to inductive ones, and analogy beats Bayes.

  • You provide an interesting perspective on moral theory here. I raise an issue with the methodology over in the comments at the Experimental Philosophy blog. But I also have a worry about how you proceed even assuming that methodology.

    You propose to look at simple cases, e.g. picking up a pencil one has dropped, and think they support the minimal morality principle of doing what one thinks will get one what one (and others?) wants. The idea is supposed to be that our moral intuition here would be that it’s okay or morally good or morally permissible to pick the pencil up.

    (Note: In your response to Caplan, you say you’re not trying to develop principles of right action, only morally good outcomes. But this just seems inconsistent with what you say in the post. You frame all this in terms of what it would be “fine” or okay for the agent to do. But even if you reformulate your claims in terms of outcomes, your position here is supposed to be about moral theory in general. And assuming moral theory primarily involves outcomes is to assume, question-beggingly, some sort of consequentialism. Your points could be taken as providing some indirect argument for a sort of consequentialism, but it shouldn’t assume it.)

    Back to the case: But intuitions about this pencil case (and the others you initially discuss in support of the minimal principle) don’t at all seem to be moral intuitions. This is an issue of prudence (or something similar). So I don’t see how our judgment about such a case should provide any support for a moral principle. Perhaps it supports some sort of normative principle about what we have reason to do or what we ought to do, but the reasons here don’t at all seem to deal with morality. To construct and evaluate moral principles based on moral intuitions, we’d need to look at simple moral cases, e.g. kicking babies for fun. And there I suspect the best explanation of the moral judgment will not be a principle involving getting what you (and others?) want.

    (Note: I put “and others?” in there because it’s unclear to me whether you really take your minimal principle to be egoistic [doing what will satisfy the agent’s wants] or preference-utilitarian [doing what will satisfy the wants of most people]. You say you’re going for the latter at one point in the post, but the initial principle you develop by looking at the simple cases seems to be the former. And it’s a big jump to go from the one to the other.)

  • Unnamed

    If the direction of causality is that people want something because they judge it to be good, then your rule is essentially deferring to the judgments of others about what is good. Do you agree that your rule might be doing this, and not see the problem?

    One reason why this looks like a problem to me is that it means that you don’t have much of an explanation or moral theory – you’re not reducing goodness to anything more basic, just giving a rough cue for identifying it. A related issue is that your curve may only appear to be simple because you’re letting people’s minds do the complex work. Finally, if you’re going to incorporate people’s judgments about what is good, I don’t see a justification for picking out this subset of those judgments. Why not go all the way and make your rule “usually the things that people think are good really are good”? That pattern shows a similarly strong relationship (for the data points that you’ve chosen to focus on this rule is extremely highly correlated with your “wanting” rule), and it’s not clear which rule is simpler (if wanting something requires judging it to be good plus something else, then perhaps judging to be good is simpler).

  • Robin – I don’t see how your addition addresses my second concern. You write: “The simple pattern I see in this is that outcome goodness is increasing in how much each person wants that outcome.” But the limited data you point to are equally consistent with the simple pattern that outcome goodness is increasing in how much the agent wants that outcome. So that’s no explanation at all of why you prefer the former pattern.

  • Richard, Joe usually has the intuition that it is morally better for Joe to get what he wants, if no one else cares and no special considerations apply, but Joe also usually has the intuition that it is morally better for Ann to get what she wants, if no one else cares and no special considerations apply.

    Unnamed, a simple pattern expressed in terms of complex creatures can still be a simple pattern. The pattern that exactly matches the data is “simple” in some sense, but not in the relevant curve-fitting sense.

    Josh, I really meant my clarification; I’m talking primarily about goodness. But once we do talk about rightness a candidate simple comprehensive pattern there is that our intuition is usually that acts that increase goodness most are the most right.

  • Robin,
    I wasn’t trying to say that you didn’t mean your clarification. It just doesn’t square well with what you say in other places of the post.

    But that wasn’t my main worry anyway. What do you think about the non-moral character of the pencil case and the other cases you base the minimal principle on? Shouldn’t the simple cases we’re basing the moral principle on be moral cases that we’d have moral intuitions about?

  • Psychohistorian

    The idea of an error rate requires the existence of an objective measure of accuracy. If there’s no objectively right answer, there can be no real error.

    Which would mean that error is simply how far your beliefs/actions deviate from your personal moral values. In which case, the least error-prone morality is that everyone should do whatever they happen to do regardless of anything, since your error will be zero. Or that you’re always right no matter what you do, though that doesn’t make other people moral.

    TLDR: thinking of morality in terms of minimizing error just doesn’t work.
