First, let me thank Robin Hanson for inviting me to Overcoming Bias. In this first post of mine, let me present a puzzle that has some bearing on both my area of study, moral theory, and overcoming bias:
Rush Rhees claimed that “in matters of morals it is not reasons which decide the issue” (“Deciding What I Ought to Do” in Moral Questions (1999)). I find this statement remarkably puzzling, and Rhees recognizes that this is an uncomfortable way to express his point, since he admits that reasons are relevant to moral decision-making. (He is not a moral relativist or subjectivist in any obvious way.) The question, then, is what does decide the issue? Rhees’ view is that our moral decisions are inherently personal, a point that he emphasizes by declaring, “Only I can decide what I ought to do.”
At the very least, Rhees’ claims appear to make sense of moral dilemmas. These are situations in which a person has equally good reasons for choosing either of two courses of action (or where there is an incommensurability between the reasons – these cases are harder and more theoretically contentious). If we were to find ourselves in a difficult moral dilemma, we might begin to understand why Rhees claims that reasons don’t (and can’t) decide the issue, and that only we ourselves can decide what we ought to do.
Some theorists might object that there can be no question about what I ought to do in a dilemma, except that I ought to do something, and that only I can decide what I will do. But since I ought to do something and, presumably, ought not sit on my hands and refuse to make a choice, it makes sense (for me as the chooser) to regard my decision as what I ought to do. (At least, it makes sense to think that I am doing something right.) Jean-Paul Sartre claimed that in making such a decision, I create a moral truth (see “Existentialism is a Humanism”). Supposing that Rhees and Sartre are correct, however, the so-called “truth” I create can only be something that is “true for me” alone, and this violates the classic universalizability principle, which says that if act A is right for person P to do in situation S, then A is the right action for anyone sufficiently like P in a situation sufficiently like S.
Suppose, then, that I know of someone else who was in a situation much like my own, who was similar in character and values, and yet who chose differently. If the universalizability principle were correct, then it would seem that I would be justified in criticizing such a person as having chosen (and acted) immorally. But if Rhees (and Sartre) are correct, then it seems my criticism would be unjustified – I would be criticizing a choice that no one but the person who made it was in a position to judge as what he or she ought to do.
One way to solve this puzzle is to claim that in moral dilemmas, there is no such thing as “the right thing to do” (where this picks out one course of action or the other), and so the universalizability principle simply fails to apply to the specific choices made within moral dilemmas. But this itself is an important result. For if, in making a choice within a dilemma, I do something morally good, then some moral choices are not universalizable. This would further imply that I cannot rationally criticize people in similar situations who chose other than I do, and that full-blown commitment to the universalizability principle – a crux of traditional ethical theorizing – can give rise to bias when applied to our own practical decisions.
This is important because, as Simon Blackburn notes, we may have a tendency to “plump” in the face of dilemmas, such that we make our decision look (or feel) like the only one – read: the only moral choice – we could have made (see “Dilemmas, Dithering, Plumping, and Grief” in Moral Dilemmas and Moral Theory, ed. Mason (1996)). One challenge of choosing in a moral dilemma is to choose without “plumping” so much that we blind ourselves to the fact that others may rationally (and morally) choose the other course of action, while still plumping enough to motivate ourselves to choose where the reasons themselves can’t make the choice for us.
One probably-final comment: you say that

lack of external "guidance" - the absence of something to tell me what to think or do about my situation - is presumably what can make REAL moral dilemmas so excruciating.

This puzzles me; I would have thought the "excruciating" quality came simply from an uncomfortably high probability of being disastrously wrong. Of course, if (a) you had reliable external guidance, then the probability of being disastrously wrong would go down to zero, just as if (b) you had reliable internal guidance, or if (c) the cost of being wrong were low in the first place. I wouldn't particularly emphasize (a) over (b) or (c); did you mean to do so, or were you considering them to be excluded by the terms of reference, or something else?
Anyway, it might be good to think about real examples (this is a morning for asking for real examples, I guess; I'm working with Russian linguists and I mostly don't know what they're talking about, even when they switch into English for my sake). If you google for the "man who saved the world", you get a variety of links to pages about Stanislav Petrov (1983) and some about Vasili Arkhipov (1962), two Soviet officers without whom we would not be having this discussion. Each had to choose without really adequate data: Arkhipov's sub was getting hit with US depth charges (not too close, it seems), and his co-captain seems to have thought it likely that a nuclear war was going on overhead; Petrov's computers told him about incoming US missiles. Each decided not to launch. Of course, they may not have perceived it as a dilemma; we don't know. But if you want to think about excruciating, I think these are more interesting examples than Sartre's. :-)
Well, there might be survival advantage in a lot of things, but I think you've fairly well exposed the problem with "plumping" - I suppose we could call what you're referring to "post-plumping" (or, as Hal succinctly referred to it, a self-serving bias).
When I originally introduced this idea of "plumping," I meant to refer to the problem one might have in making a decision (the frustration, or self-doubt, involved) where reasons are indecisive. That lack of external "guidance" - the absence of something to tell me what to think or do about my situation - is presumably what can make REAL moral dilemmas so excruciating. (This is a point that Rue was concerned I had unduly neglected.) But if I "convince myself" that the choice I make is the right one, that act of "convincing" (i.e., plumping) is not guided by a reason. Of course, that doesn't mean that my decision itself is irrational or unjustified - since it seems that, in a dilemma, I am justified in choosing either way - but I can't take my *particular* course of action to have "universal validity." And if I (pre-)plump, I may delude myself into thinking that not only did I do *something* right or good, but that I did the *only* right thing. (In thinking that, I seem to forget about the other possibility and that I could equally well have chosen it.)
BTW, Tom, thanks for the discussion.