16 Comments

One probably-final comment--you say that

lack of external "guidance" - the absence of something to tell me what to think or do about my situation - is presumably what can make REAL moral dilemmas so excruciating.

This puzzles me; I would have thought the "excruciating" quality came simply from an uncomfortably high probability of being disastrously wrong. Of course if (a) you had reliable external guidance, then the probability of being disastrously wrong goes down to zero, just as if (b) you had reliable internal guidance, or as if (c) the cost of being wrong were low in the first place. I wouldn't particularly emphasize (a) over (b) or (c); did you mean to do so, or were you considering them to be excluded by the terms of reference, or something else?

Anyway, it might be good to think about real examples (this is a morning for asking for real examples, I guess; I'm working with Russian linguists and I mostly don't know what they're talking about, even when they switch into English for my sake). If you google for the "man who saved the world", you get a variety of links to pages about Stanislav Petrov (1983) and some about Vasili Arkhipov (1962), two Soviet officers without whom we would not be having this discussion. Each had a choice without really adequate data: Arkhipov's sub was getting hit with US depth charges (not too close, it seems) and his co-captain seems to have thought it likely that a nuclear war was going on overhead. Petrov's computers told him about incoming US missiles. Each decided not to launch. Of course they may not have perceived it as a dilemma; we don't know that. But if you want to think about excruciating, I think these are more interesting examples than Sartre's. :-)


Well, there might be survival advantage in a lot of things, but I think you've fairly well exposed the problem with "plumping" - I suppose we could call what you're referring to "post-plumping" (or as Hal succinctly referred to it, a self-serving bias).

When I originally introduced this idea of "plumping," I meant to refer to the problem one might have in making a decision (the frustration, or self-doubt involved) where reasons are indecisive. That lack of external "guidance" - the absence of something to tell me what to think or do about my situation - is presumably what can make REAL moral dilemmas so excruciating. (This is a point that Rue was concerned that I had unduly neglected.) But if I "convince myself" that the choice I make is the right one, that act of "convincing" (i.e. plumping) is not guided by a reason. Of course, that doesn't mean that my decision itself is irrational, or unjustified - since it seems that, in a dilemma, whichever way I decide is permissible - but I can't take my *particular* course of action to have "universal validity." And if I (pre-)plump, I may delude myself into thinking that, not only did I do *something* right or good, but that I did the *only* right thing. (In thinking that, I seem to forget about the other possibility and that I could have equally well chosen it.)

BTW, Tom, thanks for the discussion.


I'm glad of the skepticism agreement; that sentence seemed odd, but I never thought of full voice versus partial voice... (I've been misreading a lot today.)

On commitment, I expressed myself poorly (pretty common; in co-authored books, I've been the guy who writes most of the code and my co-author writes most of the English, which is funny 'cos his native language is Russian). Suppose as before that you are person P1 in situation S1 choosing between actions A1 and A2, but you are not confident of which (if either) is right or of which (if either) is wrong. There are at least two ways that group membership (in the end, partisanship) might interact with choice:

- before choice, you might ask "who do I want to be like, and what do they choose?"
- after choice, you might ask "what have I joined, and what else does this commit me to?"

I think you're talking about the first of these, which as you say is one kind of moral calculus, a way of finding out that your choice is not actually a dilemma at all. If the people you like/admire choose A1, go for it. (But do read Lying in Ponds.) I agree, but I was talking about the second. Suppose you've resolved a dilemma, i.e. you've made a choice where you didn't think that the right choice was clear. You are very likely to find yourself linked with others who chose similarly -- whether or not you ever thought about them. This is the moment of "plumping", as I understand it, and you can think about it as a second moral choice following the dilemma-choice: should you keep saying "well, I chose A1, and I'm probably not gonna think about it any more but I'm still not at all sure", or should you say "I have now joined the tribe of A1-choosers! Heretics beware!"? I think there may well be a survival advantage to the latter course, despite the consequent cost of sunk-cost reasoning, killing of heretics and blasphemers, and so forth. I suspect that some of what I remember from the appendix to Axelrod's Evolution of Cooperation could be relevant, if you're looking for stuff to read; my own approach would be to write a simulation, but not soon. Or you could look at the biological models: as the March 3 Science News puts it (p 139)

"As group size declines, life goes to hell in a hurry," Clutton-Brock says. "It's in everybody's interest to maintain group size."He's talking about meerkats, but the principle is pretty well universal; solitary wasps have rather restricted lifestyles.

Giving a program "ought" by explicitly telling it when to stop and when to go is an attractive notion that nobody has been able to make work very well. It works better to define rules; look at, umm, Boids, and think about any of the swarms of computer-generated critters you've seen in the movies...and about Axelrod. Critters, including simulated critters inside a computer or in a robot body, recognize others of their kind and have rules for dealing with them. More and more, year by year...
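Here is roughly what "define rules" means in practice -- a bare-bones, made-up sketch in the spirit of Reynolds' Boids (three local rules, toy coefficients of my own choosing, no global "ought" anywhere):

```python
import random

N, STEPS = 20, 100
boids = [{"x": random.uniform(0, 100), "y": random.uniform(0, 100),
          "vx": random.uniform(-1, 1), "vy": random.uniform(-1, 1)} for _ in range(N)]

def step(boids):
    for b in boids:
        others = [o for o in boids if o is not b]
        # cohesion: drift toward the flock's centre of mass
        cx = sum(o["x"] for o in others) / len(others)
        cy = sum(o["y"] for o in others) / len(others)
        # alignment: nudge velocity toward the average heading
        avx = sum(o["vx"] for o in others) / len(others)
        avy = sum(o["vy"] for o in others) / len(others)
        # separation: back away from anyone crowding in too close
        sx = sum(b["x"] - o["x"] for o in others
                 if abs(b["x"] - o["x"]) + abs(b["y"] - o["y"]) < 5)
        sy = sum(b["y"] - o["y"] for o in others
                 if abs(b["x"] - o["x"]) + abs(b["y"] - o["y"]) < 5)
        b["vx"] += 0.01 * (cx - b["x"]) + 0.05 * (avx - b["vx"]) + 0.1 * sx
        b["vy"] += 0.01 * (cy - b["y"]) + 0.05 * (avy - b["vy"]) + 0.1 * sy
        # crude speed limit so the sketch doesn't blow up numerically
        speed = (b["vx"] ** 2 + b["vy"] ** 2) ** 0.5
        if speed > 2.0:
            b["vx"], b["vy"] = 2.0 * b["vx"] / speed, 2.0 * b["vy"] / speed
    for b in boids:
        b["x"] += b["vx"]
        b["y"] += b["vy"]

for _ in range(STEPS):
    step(boids)
print([(round(b["x"], 1), round(b["y"], 1)) for b in boids[:3]])  # three critters, post-flocking
```

The point is that no line of it says GO or STOP; each critter just applies its rules to whoever happens to be nearby.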


Tom, the quote from my paper is not "full voice" until the last sentence, so of course I agree with your skepticism.

What you suggest about commitments seems right, but I would think that if you were committed to being (or becoming) a particular kind of person, and that commitment pointed strongly enough in favor of one of the two actions, then you wouldn't be in a moral dilemma. However, the future-directed bit might provide an interesting way to think about the decision (rather than fumbling around further with the questions, "what do I want to do?" or "what ought I do?").

The connection to sunk cost fallacies is VERY interesting. And one thing about the robots: at least for the time being, robots can't ask questions about their own moral commitments or future moral identities, so that's out for them. I don't know anything about programming, but it seems that giving a computer program "ought" just means telling it when to GO, when to STOP, and what exceptions there are to each of these commands in particular instances.


Hm...well, I think the 1980s literature on moral dilemmas wasn't in the UPenn library when I played with deontic logic in the 70s. But maybe I just didn't look hard enough, I was mostly just trying to write a temporal logic theorem-prover for graphs representing parallel programs, and useful axioms for statements about "before" and "after" were sometimes more than I could handle; maybe they got mixed up. "Hasn't been written yet" shouldn't have been an obstacle, should it? :-) Anyway, if you think formal-"ought" deontic reasoning for armed robots is scary, just imagine armed robots without any formal sense of "ought" at all. It doesn't help to refuse to imagine armed robots; they exist, and they will get "better" at Moore's-Law rates. The Attack Of The Genius Robot Cockroach Swarm will come -- not soon, but it will come, and the humor will be rather dark.

So I would love to see some progress in ethical reasoning. Indeed, I expect to see some progress, but I don't really expect it from anything based on deontic logic.

I would find it hard to take seriously the ethical reasoning of your "commentator" who says that we ought to assume determinacy because otherwise we might get complacent. In any given case, it may be that there is no best answer; it may be that there is an answer, but that there is no effective way to find the data to determine it; or that the data is readily available, but an infinite amount of computation is required to come up with it; or that the amount of computation is finite but excessive (as with my temporal-logic theorem-prover -- the system was decidable, but my major prof at the time, Amir Pnueli, showed me a bug in my LISP, and the smallest formula for which the bug would make a difference would have required more memory than our late-70s computing system could provide).

In your essay, you say that

The aim of morality, then, is to serve as a guide for action. Furthermore, it must guide us in a determinate way, clearly distinguishing right ways of conduct from wrong. But that does not imply that there is only one moral course of action in any given situation.

I'm skeptical of that "furthermore", to put it mildly; I think it's applying "must" to a sometimes-impossible goal, which really looks like wishful thinking. I'd prefer something like "Furthermore, it (our moral reasoning process) must guide us in an effective way, so that our choices are likely to be better overall (i.e., result in a better world) than if we employed one of the available alternatives." There's a chance of getting that; I don't see how there's a chance of getting much more.

One thought about "plumping", though; you seem to describe it as making a commitment to a decision as an internal motivational issue, but it may not just be internal. Suppose you described it as making a commitment to being the kind of person who (you think) would make that decision. After Sartre's young man decides to go to war or decides to take care of Mom, he will automatically find a social process wherein other soldiers or other caretakers will be his allies -- his tribe? Your identity has a lot to do with group membership, and "plumping" may be a good survival strategy. To understand plumping, it might be good to look at evolutionary psychology. (In fact, sunk cost fallacies might derive from an evolved plumping-strategy mechanism in your brain, and this might be testable -- in the multiple personae of the aforementioned Genius Robot Cockroach Swarm. Wouldn't that be cool?)


FYI, there are two commenters here with the name "Matthew" on previous comments. I am changing mine to Matthew C to avoid confusion.


Robots using deontic logic: hilarious and terrifying. I was mildly obsessed with deontic logic for a while (but recovered, thankfully), because some of the older literature on moral dilemmas (1980s) (presumably, the kind of moral philosophy Rue was disparaging) applies it heavily, and some people argued on deontic grounds that real moral dilemmas are impossible because they generate a contradiction, or at least a problem (that I ought to do what I can't - I suppose that would freak a robot out, too). There appeared to be a presumption against the possibility of real indeterminacy (that all indeterminacy of choice is merely apparent, and so if we had more information the appearance of a real dilemma would disappear). I gave a paper (part of my in-progress dissertation) that argues against such determinacy, and my commentator basically said, "That's a nice idea, but we should hold out for determinacy - assume we are ignorant - or we might become morally complacent and stop seeking better reasons."
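(For anyone who hasn't run into that deontic argument, the usual reconstruction -- a textbook-style sketch, not a quotation from any particular paper -- goes roughly like this, where O is "ought" and the diamond is "can":)

```latex
\begin{align*}
&1.\ O A,\quad O B                                   && \text{(the dilemma: I ought to do each)}\\
&2.\ \neg \Diamond (A \wedge B)                      && \text{(I cannot do both)}\\
&3.\ (O A \wedge O B) \rightarrow O(A \wedge B)      && \text{(agglomeration)}\\
&4.\ O(A \wedge B) \rightarrow \Diamond (A \wedge B) && \text{(``ought implies can'')}\\
&5.\ \Diamond (A \wedge B)                           && \text{(from 1, 3, 4)}\\
&6.\ \bot                                            && \text{(5 contradicts 2)}
\end{align*}
```

So something has to give: either real dilemmas are impossible, or one of the auxiliary principles (agglomeration or ought-implies-can) has to go.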

But (and Rue should appreciate this) such a response just avoids the issue: how DO we decide in the face of real indeterminacy? It certainly is a possibility that two options could be equally "weighted" on the most ideal framework out there. We've both mentioned coin-tosses, and in a sense, whether you decide that way or through some other similar process, the result is the same: you have to decide without (further) reason. It seems to me that some people don't like (or feel existentially threatened by) such a possibility; Sartre's case (in the essay I linked to) of the young man who comes to him asking for moral advice might illustrate this, for Sartre simply tells the young man: it doesn't matter what moral system you adopt, none of them are going to TELL you what to do in this kind of situation. Gee, thanks, Jean-Paul. (Compare this to students who "just want to be told what the answer is.")

The paper I mentioned is available here: http://comp.uark.edu/~mpian...


To refocus what I was saying, it seems to me (and you seem to agree) that "universalizability" as you describe it puts the quantifier in a bad place for the context of dilemma-resolution, saying "now that I've decided for A1 not A2, everybody should agree with my decision -- even if they already used my decision-making procedure, which would have gone [did go] the other way for a very slight perturbation of my input parameters, or might with some probability have gone [did go] the other way for identical input parameters."

In other words, universalizability puts your moral focus on disagreements for which there were and are, by hypothesis, no good reasons, and ignores any agreements or disagreements there may be in your actual moral reasoning. The only justification I see you reporting for this oddity (I would say "absurdity", but only to indicate the extent of my non-comprehension of a "classic principle" here) is motivation: "plumping enough to motivate ourselves to choose where the reasons themselves can’t make the choice for us." Huh? I have many times in my life made a decision based on a coin-flip...haven't you?

I don't have a background in ethics, but long ago as a computer science grad student I spent time with deontic logic, the idea being that we needed some kind of framework for having a computer system -- especially a robotic one -- decide what to do. It didn't work very well, and I don't expect it to work very well, but I think the problems are becoming more urgent. It looks to me like Moore's Law thinking is now appropriate for robotic developments, and I'm seeing more and more of them, like the VIPeR

"Portable Combat Robot ... can also be configured with weapons capability comprising a 9 mm mini-Uzi with scope and pointer, or grenade launcher."

Some kind of a framework for making choices is needed, somewhere within the next millionfold improvement, whether that's exactly thirty years or not. I don't think universalizability as you describe it is likely to be part of that framework. I do think that evolutionary analysis -- genetic algorithms on choice-making procedures -- is likely to be part of it, and it might be interesting to think about how (whether?) "plumping" might come in...Hmm.
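To give a flavor of what I mean by evolving choice-making procedures, here's a toy, entirely invented sketch: each "procedure" is nothing but a threshold for acting on noisy evidence of a threat, and selection plus mutation does the rest.

```python
import random

def fitness(threshold, trials=200):
    """Score a choice procedure: act exactly when there is a genuine threat."""
    score = 0
    for _ in range(trials):
        threat = random.random() < 0.3                    # made-up base rate of real threats
        evidence = random.gauss(0.8 if threat else 0.3, 0.2)
        act = evidence > threshold
        score += 1 if act == threat else -1               # reward correct calls, punish mistakes
    return score

population = [random.random() for _ in range(30)]         # thirty random procedures
for generation in range(40):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                             # keep the fittest procedures
    children = [min(1.0, max(0.0, random.choice(parents) + random.gauss(0, 0.05)))
                for _ in range(20)]                       # mutate to refill the population
    population = parents + children
print("evolved threshold:", round(sum(population[:10]) / 10, 2))
```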


Rue, I certainly understand your animosity (the "nothing ever gets done in academic papers" objection makes me question what I'm doing *nearly* every time I attend a talk), but I don't think "This is good," nor do I appreciate being made to answer (or held to blame) for the whole of "moral philosophy."

Your last question is a good one. One view I've been exploring is that choices (or moral judgments) might "inherit" truth from their being made under certain limiting conditions (I have to think that something *moral* is at stake, rather than a *mere* matter of taste, and that I'm not making an idiosyncratic (or biased) judgment) - those are all subjective-sounding criteria because I'm trying to take subjectivism seriously.

The difficulty I have with saying that "choice is a moral truth maker" is that, for example, in the moral dilemma case where both choices are equally weighted (and so I have good reasons for doing either), there's a gap between reasons and the choice - now, I suppose Rhees tends to think (and maybe Sartre does, too) that this gap is there more often than we (moral philosophers) think. At this point, I'm tempted to say, well, then we must decide without truth, because reasons (which I think can be true although not decisive) have taken us as far as they can. The person faced with such a choice is operating within the realm of "moral intelligibility" (whatever she chooses), and as I suggested above, the only reason I can see for saying that such choices are true (or express the claim that 'X is the right thing for me to do') has to do with motivating myself to act in the face of a real quagmire: well, why not just flip a coin to decide? (Can I stake a morally significant decision on a coin toss? Am I deluded to think - if I really can't decide - that anything else I could do would be different than flipping a coin? Am I shirking responsibility by flipping a coin, since later on, I can always blame the coin for how poorly things turned out for me?)


Matthew

Now you're practicing my brand of moral philosophy: ad hominem, no law in the arena. You question my reading abilities and my sensitivity to philosophical questions. I question your career focus. This is good.

You're not going to get that at an academic conference.

The whole reasons-for-action argument is a dead end. Starting with Williams' internal/external reasons paper, Blackburn's drivel, Scanlon's ponderous tome, Korsgaard's righteousness, and now Parfit's soon-to-be-released book, the whole lot is moving in the wrong direction. Perhaps you agree with this. If you sympathise with Rhees, then that might be so. But I find the whole debate a rather tedious translation of an already existing problem, that is, whether there are objective moral truths. They make no advance, not even a marginal development on an already existing model. And yet they have tenure and genius grants.

Do you want to say that choice is a moral truth maker?


Tom, yes, that's the idea. Also, the connection with what you say about "assigning blame" on the basis of a person's believing that her choice is better is part of the problem I was trying to point to at the end by mentioning this idea of "plumping" - do I really *need* to make myself believe that one option is better in order to choose (especially when all information points to equal weight)? And what kind of bias (against other-choosers) is that "plumping" going to result in?

Rue, you could try reading my post. The whole post leans toward skepticism about universalizability; you clearly missed that point. (Were you just assuming that, since I have an "academic tone," I just accept the status quo principles?) I said '"IF" the universalizability principle holds, THEN... ... ...' And, I meant to be implying, trouble lies therein.

And while I don't go in for 'Radical' Sartreanism, I have spent a lot of time thinking about his view, and take quite a bit of what he suggests seriously. You do, however, seem right about the "weakness" of the universalizability principle in relation to Sartre, and I realize he's suggesting that no two situations are exactly the same. I have a limited number of words to make a few rough points in this post, and I probably should have omitted Sartre, for reasons of space. (But it seemed to complement the ideas from Rhees that primarily puzzled me - and by puzzled, I didn't mean "THINK THEY'RE WRONG" - I meant, "Wow, that's interesting but I wonder how exactly to make sense of it as a true view..." The post was an attempt to start puzzling that out...)


Let me announce my bias upfront. I think your subject quaint at best and inimical to living at worst. Moral philosophy is a waste of time and money. Its funding ought to be cut. And I am biased against this style of thinking, namely, that you believe talking about a moral dilemma is the proper way to study the contours of the dilemma experience. In short, the information you assume we act on is a gross simplification of a much richer reality.

All that said, let's look at some of your weak points.

What are the origins of "universalizability"? Not in intellectual history, but rather in causal terms. Hume struck at the foundations of inductive reasoning about causal fact, and that tree tottered long ago, so there's no reason, certainly not in your argument here, to believe that what doesn't hold in the comparable case holds here. If you say universalizability is a property shared by all right acts, I want to know: what is the cause of this property? And why should I grant it authority over me? Because it did what exactly in the past?

Next, let's assume that universalizability is a property all morally right acts have. Still, you cannot claim that Sartre's view violates that principle. A radical Sartre would claim:

(a) the rightness of a person's action varies with or is relative to the choice that person makes.
(b) that person's choice is morally superior to anyone else's choice.

If (a) and (b) hold for everyone, then it passes the universalizability test. And yet, I am willing to predict from your academic tone alone that you find this radical Sartrean view repugnant.

The weakness in your set-up is suggesting that universalizability requires someone sufficiently like P in a sufficiently alike situation S to perform act A. But this begs the question. This is the question. Don't dodge it. Sartre is claiming that no two situations are the same. "Sufficiently alike" is too weak a condition to universalize. To be fair to Sartre, you have to emend it to "identically".

Now lastly, let's say your sufficiently alike thesis holds. How can you be sure that you have this knowledge? That two situations are sufficiently alike?


If person P1 in situation S1 chooses action A1, and similar P2 in similar S2 chooses dissimilar A2, they may be using the exact same choice-function C(P,S)=A, one that happens to be non-linear. Trivial example: suppose you have a decision to make which involves what you think next Monday's weather will most likely be. An extremely small difference in your starting values can make a large difference in the result. (The rain it falleth on the just / and also on the unjust fella; / but mostly on the just, because / the unjust man stole his umbrella.) If you don't like that sort of quasi-determinism, consider NP-hard decision problems where a small change in the cost of traversing a single arc can change every choice to be made in optimizing a traveling salesperson's problem. (But don't ask me for an example; it's been years.)
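The trivial kind of thing I have in mind, spelled out in code (toy numbers, and the "war vs. Mother" labels are just borrowed from Sartre's young man for color):

```python
# A toy choice-function shared by P1 and P2 (my own construction, purely illustrative).
def choose(weight_duty, weight_care):
    """C(P, S) -> A: the same function for everybody, non-linear because of the cutoff."""
    return "A1 (go to war)" if weight_duty > weight_care else "A2 (stay with Mother)"

print(choose(0.5001, 0.5000))   # P1 in S1 chooses A1
print(choose(0.5000, 0.5001))   # P2, in an all-but-identical S2, chooses A2
```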

These suggest to me that universalizability is of limited use; it assigns blame based on the determination of a value, i.e. on an implied computation, whether or not that computation can be performed by the agent you're blaming, and it makes P1 blame P2 even if P1 had very little confidence that his choice was better. I'd rather back off from C as an implicit choice-function anyway, and replace it with an explicit cost-evaluator: since P1 presumably thinks that the A1 in S1 is less bad than A2 in S1, let us ask him

please compute for me the badness B[i][j] of A[i] in S[j] for i,j=1,2;

now look at how different those actually are, and see if there is a major disagreement. If there is, then try to find a way to ask P1 to compute the badness-distribution B[i][j](b)= his confidence that B[i][j] will achieve badness of at least "b". But I dunno...maybe this is not of interest. Delete if it's really boring.
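In case it isn't boring: the bookkeeping I have in mind is nothing more than this (the numbers are made up, and numpy is just for convenience):

```python
import numpy as np

# Entirely invented badness estimates: row i, column j = badness of A[i+1] in S[j+1].
B_p1 = np.array([[0.30, 0.35],
                 [0.32, 0.31]])
B_p2 = np.array([[0.33, 0.34],
                 [0.29, 0.30]])

gap = np.abs(B_p1 - B_p2)
print(gap)                      # element-by-element disagreement
print(float(gap.max()))         # if this is small, there's no major disagreement to moralize about
```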


do people still believe in ethics? huh, go figure.


Hal, this gets into the problem that it can seem, in a moral dilemma, that "you're damned if you do, and damned if you don't." I certainly realize that there's something odd about Rhees' remarks - which is why I'm puzzled.

Let's change the case to one where I'm considering two different people in similar situations who choose differently. The point is that if I accept the universalizability principle, I would be under pressure to judge that one of them did the wrong thing. But if the reasons are equally weighted, there seems no reason to prefer one choice over the other, and so I CAN'T apply the universalizability principle. Perhaps this means we have to drop talk of "the right thing to do," and maybe even of what I "ought" to do. (All we can say is, I ought to do one thing or the other.) This means that if we think all moral decisions - I mean, the *specific* course of action chosen - are universalizable, we seem to make an assumption that WOULD lead to unjustifiable criticism (or, bias).

Also, I just meant to inform other members that I'm approaching these issues from a background in ethics. (I suppose the post makes that obvious).


I will assume that by posting a blog entry in a forum like this, you are inviting comments by laymen, who are not expert in the fields you study.

"Suppose, then, that I know of someone else who chose differently and yet was in a situation much like my own, and who was similar in character and values. If the universalizability principle were correct, then it would seem that I would be justified in criticizing such a person as having chosen (and acted) immorally."

Wouldn't you be equally justified in criticizing yourself for having made the wrong choice? Why assume that you were right and he was wrong? This kind of self-centered view is one of the main types of bias we have discussed here.
