Does the Modesty Argument Apply to Moral Claims?

In “Enhancing Our Truth Orientation,” Robin argues that Aumann’s theorem applies to moral claims. I’m very skeptical of this position, primarily because there does not seem to be a plausible way to translate moral positions into the kinds of probability judgments suitable for Bayesian reasoning.

What reason do we have to believe that moral positions can be understood as subjective probabilities? Is there anyone who genuinely believes that, say, deontology is true with a probability of .7, virtue ethics with a probability of .299, and utilitarianism with a probability of .001? Or that it’s 35% likely to be true that you can’t lie to the murderer at the door? (Kant’s infamous case.) Does it even make sense to say that? Is it at all coherent? What might it mean to utter the statement “there is a .35 probability of it being wrong to lie to the murderer at the door?”

Here’s what it can’t mean: “If you lie to the murderer at the door 100 otherwise identical times, you can expect to have violated the moral law 35 of those times.” Nobody in their right mind would make that sort of claim. If you utter that statement, you’ve stopped talking about morals and started talking about facts: If it’s wrong to lie to the murderer one time, it’s wrong to lie to the murderer all other times, unless the facts — rather than the values — changed. (I’m ruling out some sort of extreme moral skepticism here, since if you’re that much of a moral skeptic, you shouldn’t be making statements about probabilities of moral conduct at all.)

Here’s what it also can’t mean: “I’m pretty sure it’s ok to lie to the murderer at the door.” That’s not a probability statement. Not even to a Bayesian. (Eliezer’s “technical explanation of technical explanation” nicely explains why — in the context of Star Trek, no less.) Even if that were a statement about probability, it’s implausible to think that the confidence one has in one’s moral claims could be expressed in numbers. How would that work? “I think Rawls’s theory of justice makes sense, except I’m not really sure about his claim that it should be limited to the basic structure of society. That’s about 28.47% of his argument, so I guess I’m a Rawlsian with .7153 probability.” What does that mean? Why isn’t the basic structure limitation 98.4% of his argument, or .00018% of his argument? How do you get an objective measure of that amount? Do you count sentences?

Moreover, even if you accept the notion that Bayesian reasoning can be extended to non-numeric estimates of uncertainty, it’s still really problematic to apply it to normative claims. For one thing, there’s still no objective rule describing how we might reconcile weights. If I think Bernard Williams’s character/integrity argument casts a lot of doubt on utilitarianism, while you think it only casts a little doubt on utilitarianism, on what basis are we supposed to discuss the differences we have between “a lot” and “a little?” I think what it ultimately comes down to is that the “a lot” versus “a little” distinction is a judgment, in the Kantian sense, and not one that can be described by rules. We can’t ever get to common priors on that, because the “prior” is the exercise of a sui generis intellectual faculty.

Furthermore, moral claims are supposed to lead to action, and it makes little sense for this action to be discounted by probability in most cases — not even if there happened to be some kind of probability distribution over moral arguments. Suppose a pro-choicer and a pro-lifer got together and realized their differences came down to the question of whether the woman’s right to bodily integrity trumped the fetus’s right to the potentiality of life or not. Now suppose they’re both Bayesians with common priors and so forth, and so they mutually adjust their probability of bodily integrity trumping potentiality of life to .5. What does this mean in terms of action? Suppose they’re legislators (and no, they can’t default to the status quo, since that reflects a prior moral judgment that’s now problematic) — do they both have to vote for a bill that says that 50% of abortions are now legal? If a woman wants an abortion, must she flip a coin to see whether she gets one? That’s a position that everyone would find unacceptable.

I submit that the only possible subjective probability evaluations for moral claims are 0, 1, and “undetermined.” I further submit that “undetermined” is quite useless when one has to make a decision on a moral question. Consequently, Bayesian reasoners don’t have the capacity to adjust their probability judgments toward each other, and the modesty argument cannot apply.

  • Paul, I submit that the only issue here is whether or not we can imagine “moral possibilities.” Once you let me describe such possibilities, then you cannot escape letting me form probability distributions over them. And in fact I don’t see any other way for you to express the idea that you have moral uncertainty.

    I’ll ask you as I asked Nick S.: what do you think of the idea of “impossible possible worlds” as a way to represent reasoning about logical truths? If you’ll grant those, it seems a short step to moral possibilities.

    Nick B. was thinking about moral possibilities last summer; perhaps he’ll weigh in.

  • Carl Shulman

    Eli’s concept of (individual) extrapolated volition would be one way to get a useful probability estimate. “If my IQ were bumped up 50 points, my self-serving bias were damped down, I spent 10 years thinking about the problem, etc, what would my moral opinion be?” Of course, this requires that you have certain meta-ethical beliefs and that your process of moral decision making not be a chaotic system, but it does let you translate a moral dilemma to a physical problem.

    For the individual uncertain about the moral status of embryos (or insects, or fictional characters), one can try to maximize the expected satisfaction of one’s ideal self with the choice, e.g. encourage other forms of birth control, etc, and permit abortion where bodily integrity interests are strong.

    Personally, I’m not certain what fidelity of simulation would be required to make VR entities’ ‘suffering’ morally relevant, but I will avoid creating medium-fi Hells, even if this would be amusing to people who I am quite certain have moral standing.

  • TGGP

    I’ve never heard anything from proponents of it (although apparently Leland Yeager used to be one), but the only ethical theory that makes a lick of sense to me is emotivism. The subjective-objective or normative-positive distinction just seems like too huge a gap. So what’s a good defense of this ethics & philosophy business that takes emotivism behind the shed and gives its hide a good tanning?

  • My understanding of the interpretation of probabilities in cases like this is as follows. Ideally if I judge something to have a 35% probability, whether it is the morality of lying to a murderer or something prosaic like the chance that it will rain tomorrow, I mean that I expect to be right 35% of the time that I have that strong a feeling.
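    This calibration reading can be made concrete: gather past judgments where one felt about 35% confident and check how often one turned out to be right. A minimal sketch, with wholly invented judgment data:

    ```python
    # Calibration check: of the claims I judged with ~35% confidence,
    # what fraction turned out true? (The judgment records are hypothetical.)
    judgments = [
        (0.35, False), (0.35, True), (0.35, False),
        (0.35, False), (0.35, True), (0.35, False),
        (0.35, True), (0.35, False), (0.35, False), (0.35, False),
    ]
    hits = sum(1 for conf, correct in judgments if correct)
    frequency = hits / len(judgments)
    print(frequency)  # 0.3 -- close to the stated 0.35, so roughly calibrated
    ```

    On this reading, a “35% probability” is a claim about one’s own track record at that level of felt confidence, not about the moral fact itself.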

  • Paul, I share your worry about the problem; but I would not declare it unsolvable. In my view, it makes sense to construct probability distributions over moral judgments wherever we can uncover a *dependency* of that moral judgment on a question of fact.

    Or in simpler language, “Relinquish the emotion which rests upon a mistaken belief, and seek to feel fully that emotion which fits the facts. If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm.” This view lets us talk sensibly about probability distributions over “correct” emotions – by contagion of the probability distributions we have over correct facts. Since moral judgments, in my view, legitimately derive partially from emotions, by contagion of probability we have probabilities over moral judgments.

    And of course many moral judgments depend more directly upon facts – for example, your $FavoritePoliticalSystem is probably supported, in your view, by your beliefs about the consequences of implementing $FavoritePoliticalSystem.

    For bounded rationalists, questions of “fact” can include questions about the unknown results of known computations, as well as questions about environmental variables. So we can also talk about probability distributions over the limit of a logic perfectly applied – but it does have to be the same logic in all cases.

    But I do agree with Paul to the extent of holding that the probabilistic status of a moral claim has to be, as it were, discovered – you can’t take it for granted. You can’t just apply Aumann’s Agreement Theorem to a disagreement between two parties who “feel strongly that they’re both right”, just because the human brain happens to reuse the same subjective sensation of “rightness” for probability judgments and political arguments. You’ve got to show that the disagreement, or some important facet of the disagreement, could not persist if both parties possessed perfectly veridical views of the universe. (This is straightforward enough for most political arguments, and quite a few moral ones.)

    It is worthwhile to distinguish between moral disagreements between humans, and moral disagreements between humans and aliens. It is reasonable to suppose shared human premises by reason of shared neural architecture. Putting a probability distribution over the differing goals of humans and Andromedan babyeaters would be much more problematic.

  • conchis

    I’m with Robin on this one. On the action point, you might want to check out Ted Lockhart’s book “Moral Uncertainty and its Consequences” which grapples with precisely the issue of how to make decisions when we are uncertain of the truth of competing moral claims – a matter which, incidentally, deserves a lot more attention than it gets.

  • Paul and Eliezer, what exactly do you think goes wrong in describing moral uncertainty in terms of moral possibilities and a probability distribution over them? You must admit the formalism works just fine; it doesn’t care. Why do you care? Nick S. similarly talked about probabilities only applying to “truth tracking” situations. You guys seem to think the problem is obvious, but I don’t see it. Can you help?

  • Robin, you could write down a list of mutually incompatible moral propositions, such as “Killing people is bad” and “Killing people is good”, and you could attach numbers to each proposition, like 0.2 and 0.3. The question is whether you can legitimately call these propositions “possible moral worlds”, and the numbers “probabilities”. I don’t necessarily answer in the negative, but it’s not a trivial problem.

    Incompatible moral propositions don’t directly correspond to different *worlds* because, no matter how you arrange the atoms in a solar system, that’s not going to make it okay to kill people. Try to imagine a world that is exactly like this world – the same people, the same brain states, all the atoms in the same place, all fixed computations have the same outputs, all existing philosophers including you have written the same words in their essays – only in that world, killing people is good, instead of evil. If you call moral propositions worlds, then how do you tell which world you’re in?

    Similarly, probabilities themselves are not direct, ontological attributes of observed reality. If we don’t know whether our shoelaces are tied, that is a fact about our state of mind, not a fact about the shoelaces. (Jaynes named this the Mind Projection Fallacy.) So, just because you feel a state of uncertainty about what “probability” you ought to assign to something, doesn’t mean that “probability” is a variable-in-the-world. You could, nontrivially, state an ideal computation that you ought to be doing with the evidence you have, and then wonder about the unknown result of this fixed computation. For example, you might guess there’s a 10% probability that this ideal computation would output an answer between “90%” and “95%” that your shoelaces are tied. But, in real life, either this computation outputs “92%” or it doesn’t. And even if it is a definite fact that the computation outputs “92%”, that doesn’t mean there *really is* a 92% chance that your shoelaces are tied. Either they’re tied or they’re not. The actual world is not one in which the shoelaces have a little number 0.92 woven into the threads. The actual world is one in which a certain fixed computation has the definite output of 0.92, and your shoelaces are actually tied.
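    The point about fixed computations can be sketched in a few lines. The uncertainty lives entirely in the reasoner; the computation itself has one definite output (the numbers here are illustrative, echoing the 92% example above):

    ```python
    # Uncertainty about the output of a fixed, deterministic computation
    # is a fact about the reasoner's mind, not about the world.

    def ideal_computation():
        # Stand-in for the "ideal" evidence-weighing computation;
        # it has a single definite output.
        return 0.92

    # Before running it, the agent spreads belief over possible outputs:
    belief_over_outputs = {0.88: 0.45, 0.92: 0.45, 0.95: 0.10}

    # Running the computation resolves the uncertainty completely:
    actual = ideal_computation()
    posterior = {out: (1.0 if out == actual else 0.0) for out in belief_over_outputs}
    print(posterior[0.92])  # 1.0 -- the output was definite all along
    ```

    Nothing in the world carried the number 0.92; the distribution was a description of the agent’s ignorance about a fixed answer.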

    If you want to put probability distributions over probabilities, you’ve got to do some additional work to justify putting a probability distribution on something that isn’t part of the world. You might, for example, introduce a fixed computation whose unknown answer you’re putting a probability distribution over. But it is not permissible to just make the jump directly, based on the fact that you *feel unsure* about what probability you ought to assign. You can’t trust your brain – in this case, as in so many others. Your brain has a tendency to collapse its own perceptual states into the objects that they refer to. We say, “The coin has a 50% probability of coming up heads,” not “I assign a 50% probability to the coin coming up heads”. Evolution saving a few clock ticks again.

    Likewise it takes some extra work to build a framework for moral possible worlds. By treating moral propositions as possible worlds, you are essentially *formalizing* that case of the Mind Projection Fallacy, which may not be a wise course of action. It may lead you to forget that not all minds are in the same frame of reference that you happen to use, the way that all minds you meet will be in the same real world as you. If you try to treat with an entity that doesn’t share your resolution procedure – Andromedan babyeaters, or a paperclip maximizer – then there’s no single variable for the two of you to both be uncertain about.

  • Eliezer, I didn’t say probability is a “variable-in-the-world,” nor did I talk about putting probability distributions over probabilities. And I don’t see why we need an algorithm to “tell which world you are in”; sometimes you can’t tell. You seem to be stuck on thinking of possibilities as “how you arrange the atoms in a solar system,” but there are many other kinds of possibilities (which I’ll post on soon). In particular I’ve been trying to call attention to “impossible possible worlds,” which are usually inconsistent descriptions. Why not let us reason using those?

  • Paul Gowder

    There’s a lot of stuff to reply to here, and I’m away from home at the moment, so it might take me a while for a full response to any of it. For now, let me offer one elaboration about why probability distributions are particularly inappropriate in the case of moral claims. (Additional elaborations/more considerations to follow.)

    One distinction between factual claims and moral claims is that uncertainty about factual claims can be accompanied with evidence of the type that is particularly suggestive of probability judgments. For example, if I’m uncertain whether Joe is a liberal, evidence that Joe is an academic is useful evidence. Thus, we often describe the effect of that evidence in terms of conditional probabilities. If I know that 85% of academics are liberals, I have some real world, non-arbitrary basis for assigning the probability .85 that Joe is a liberal once I find out that he’s an academic. But there’s no similar basis for making such an assignment to moral claims, because there’s no factual evidence for moral claims. If we were to describe conditional probabilities for moral claims, we would be very hard-pressed indeed to say what it was that we were conditioning on, and to point to something observable in the world that corresponded to same.
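    To make the contrast vivid: for the factual claim, the conditional probability can be grounded in base rates via Bayes’ rule. A toy version, with base rates invented purely for illustration (chosen so the posterior matches the 85% figure):

    ```python
    # Bayes' rule for "Joe is a liberal" given "Joe is an academic".
    # All base rates below are invented for illustration.
    p_liberal = 0.50            # prior: P(liberal)
    p_acad_given_lib = 0.17     # P(academic | liberal)
    p_acad_given_not = 0.03     # P(academic | not liberal)

    p_academic = p_acad_given_lib * p_liberal + p_acad_given_not * (1 - p_liberal)
    p_lib_given_acad = p_acad_given_lib * p_liberal / p_academic
    print(round(p_lib_given_acad, 2))  # 0.85 -- the "85% of academics" figure
    ```

    Every number in that calculation answers to something countable in the world. The challenge above is that no analogous observable quantities exist to feed into the same machinery for a moral claim.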

    But without some observable evidence to condition on, it’s hard to see why any given number is anything but a random stab. I would challenge anyone who assigns a numerical (or even some kind of merely ordinal “moral claim A is more likely than moral claim B,” where A and B are not just mutually exclusive) probability assignment to a claim to give reasons why it isn’t some other number. In particular, I’d challenge anyone who has a stated probability above .5 to explain the difference between .51 and .85 and .9999 and justify their specific choice.

    I think Hal’s suggestion on this one is extraordinarily clever — perhaps we can condition on the strength of our moral sentiment/feeling/intuition. However, it still remains to be seen whether there’s some ex post way of determining how right our moral judgment was after we observe the intuition, so as to have some basis for this probability assignment. I suppose we could observe our intuitions and judge how often they match up with rational arguments. But note that this procedure would equate having a rational argument to having certain knowledge of the truth of a proposition, and I can’t see how two people, both of whom had rational arguments, could ever reconcile their opposing views of a moral claim via a Bayesian procedure. Nonetheless, I hasten to concede to Hal that if one’s only reason for believing a moral claim is one’s intuition, it might be reasonable to assign a probability to the correctness of one’s intuitions generally.

  • Paul Gowder

    Robin: Could you elaborate on your use of impossible possible worlds? I’m not quite clear on how you mean to deploy that. Is the idea that we’d assign probabilities to the prospect of being in world X where our moral reasoning is false because of some, e.g., logical factor that we haven’t considered? If so, I’d again raise the arbitrariness question with reference to assigning specific probabilities (or semi-specific ones) to those worlds. But I doubt that’s exactly what you’re getting at, so I’ll await more.

  • Robin, I think you probably have good reasons for your conclusions about moral claims, but your explanation here puzzles me. I’m guessing at your reasons by using information from conversations more than from anything you’ve written publicly.
    If I understand your position, it needs to include the claim that moral claims can be reduced to factual claims (e.g. adopting rule X is Pareto-superior to not adopting it), and we can apply truth values to factual claims.
    I find it plausible but counterintuitive that moral claims can be translated into factual claims, and I think it would take a lot of hard-to-write examples to convince the average person of this. I have some doubts about how far we can go toward making all moral claims into factual claims. And I think many people will assume you can’t be making claims of this nature unless you are more explicit about it than you have been.
    I also think people often reject this approach to moral claims because they overestimate what moral claims can accomplish, and translating moral claims into unambiguous factual claims would conflict with this overconfidence. For example, they expect that the right morality would have prevented Europeans from conquering the prior inhabitants of the Americas, and if the best imaginable factual arguments wouldn’t have convinced the Europeans to avoid conquest, then they use something on the order of superstition to rationalize the existence of a morality that would have stopped the Europeans.
    I hope someone more eloquent than I can turn these ideas into convincing arguments.
    If I misunderstood you and you really think (as your comments in this thread might be interpreted) that Aumann’s theorem applies to claims that aren’t about facts and truth values, then I’m very puzzled.

  • Paul and Peter, I think I should write a new post that better expresses my opinion. Expect it in a few days.

  • My post “Why Not Impossible Worlds” appears today.

  • Robin, if we are to apply probability theory to moral claims in a nontrivial way, there have to be correlations between moral possibilities and our sensory perceptions, otherwise Bayesian updating becomes a null operation. But such correlations seem untenable since our sensory perceptions are determined by physics, and physics is independent of morality. The atoms in my brain and the universe in general will do the same things whether “killing is good” or “killing is bad”, so nothing I can perceive can possibly provide any evidence as to which is the case.
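    The null-update point can be checked mechanically: when every observation is equally likely under both hypotheses, the posterior just reproduces the prior. A sketch with illustrative numbers:

    ```python
    # If physics fixes our perceptions regardless of which moral claim holds,
    # every observation has the same likelihood under both hypotheses,
    # and Bayesian updating is a null operation. (Numbers are illustrative.)
    prior = {"killing is bad": 0.7, "killing is good": 0.3}
    likelihood = {"killing is bad": 0.2, "killing is good": 0.2}  # identical

    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    posterior = {h: unnorm[h] / z for h in unnorm}
    print(posterior)  # identical to the prior
    ```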

    “Impossible possible worlds” doesn’t suffer from this problem.