
Robin, if we are to apply probability theory to moral claims in a nontrivial way, there have to be correlations between moral possibilities and our sensory perceptions; otherwise Bayesian updating becomes a null operation. But such correlations seem untenable, since our sensory perceptions are determined by physics, and physics is independent of morality. The atoms in my brain, and the universe in general, will do the same things whether "killing is good" or "killing is bad", so nothing I can perceive can possibly provide any evidence as to which is the case.

"Impossible possible worlds" doesn't suffer from this problem.


My post "Why Not Impossible Worlds" appears today.


Paul and Peter, I think I should write a new post that better expresses my opinion. Expect it in a few days.


Robin, I think you probably have good reasons for your conclusions about moral claims, but your explanation here puzzles me. I'm guessing at your reasons by using information from conversations more than from anything you've written publicly.

If I understand your position, it needs to include the claim that moral claims can be reduced to factual claims (e.g. adopting rule X is Pareto-superior to not adopting it), and we can apply truth values to factual claims.

I find it plausible but counterintuitive that moral claims can be translated into factual claims, and I think it would take a lot of hard-to-write examples to convince the average person of this. I have some doubts about how far we can go toward making all moral claims into factual claims. And I think many people will assume you can't be making claims of this nature unless you are more explicit about it than you have been.

I also think people often reject this approach to moral claims because they overestimate what moral claims can accomplish, and translating moral claims into unambiguous factual claims would conflict with this overconfidence. For example, they expect that the right morality would have prevented Europeans from conquering the prior inhabitants of the Americas, and if the best imaginable factual arguments wouldn't have convinced the Europeans to avoid conquest, then they use something on the order of superstition to rationalize the existence of a morality that would have stopped the Europeans.

I hope someone more eloquent than I can turn these ideas into convincing arguments.

If I misunderstood you and you really think (as your comments in this thread might be interpreted) that Aumann's theorem applies to claims that aren't about facts and truth values, then I'm very puzzled.


Robin: Could you elaborate on your use of impossible possible worlds? I'm not quite clear on how you mean to deploy that. Is the idea that we'd assign probabilities to the prospect of being in world X where our moral reasoning is false because of some logical factor, say, that we haven't considered? If so, I'd again raise the arbitrariness question with reference to assigning specific probabilities (or semi-specific ones) to those worlds. But I doubt that's exactly what you're getting at, so I'll await more.


There's a lot of stuff to reply to here, and I'm away from home at the moment, so it may take me a while to respond fully to any of it. For now, let me offer one elaboration on why probability distributions are particularly inappropriate in the case of moral claims. (Additional elaborations/more considerations to follow.)

One distinction between factual claims and moral claims is that uncertainty about factual claims can be accompanied by evidence of the type that is particularly suggestive of probability judgments. For example, if I'm uncertain whether Joe is a liberal, evidence that Joe is an academic is useful evidence. Thus, we often describe the effect of that evidence in terms of conditional probabilities. If I know that 85% of academics are liberals, I have some real-world, non-arbitrary basis for assigning the probability .85 that Joe is a liberal once I find out that he's an academic. But there's no similar basis for making such an assignment to moral claims, because there's no factual evidence for moral claims. If we were to describe conditional probabilities for moral claims, we would be very hard-pressed indeed to say what it was that we were conditioning on, and to point to something observable in the world that corresponds to it.
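
To make the contrast concrete, here's a rough sketch of the kind of conditioning I have in mind, treating the 85% figure above as a made-up population count rather than data:

```python
# Rough sketch: conditioning on observable evidence, using the made-up
# numbers above (85% of academics are liberal). The point is that the
# number is anchored to an observable frequency, not pulled from thin air.

# Hypothetical population counts (pure assumptions for illustration):
n_academics = 1000
n_liberal_academics = 850

# Learning "Joe is an academic" makes this base rate the relevant one:
p_liberal_given_academic = n_liberal_academics / n_academics
print(p_liberal_given_academic)  # 0.85
```

Nothing observable plays the role of those counts when the claim is a moral one.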

But without some observable evidence to condition on, it's hard to see why any given number is anything but a random stab. I would challenge anyone who assigns a numerical probability to a moral claim (or even a merely ordinal one, "moral claim A is more likely than moral claim B," where A and B are not just mutually exclusive) to give reasons why it isn't some other number. In particular, I'd challenge anyone who has a stated probability above .5 to explain the difference between .51 and .85 and .9999 and justify their specific choice.

I think Hal's suggestion on this one is extraordinarily clever -- perhaps we can condition on the strength of our moral sentiment/feeling/intuition. However, it remains to be seen whether there's some ex post way of determining how right our moral judgment was after we observe the intuition, so as to have some basis for this probability assignment. I suppose we could observe our intuitions and judge how often they match up with rational arguments. But note that this procedure would equate having a rational argument with having certain knowledge of the truth of a proposition, and I can't see how two people, both of whom had rational arguments, could ever reconcile their opposing views of a moral claim via a Bayesian procedure. Nonetheless, I hasten to concede to Hal that if one's only reason for believing a moral claim is one's intuition, it might be reasonable to assign a probability to the correctness of one's intuitions generally.


Eliezer, I didn't say probability is a "variable-in-the-world," nor did I talk about putting probability distributions over probabilities. And I don't see why we need an algorithm to "tell which world you are in"; sometimes you can't tell. You seem to be stuck on thinking of possibilities as "how you arrange the atoms in a solar system," but there are many other kinds of possibilities (which I'll post on soon). In particular I've been trying to call attention to "impossible possible worlds," which are usually inconsistent descriptions. Why not let us reason using those?


Robin, you could write down a list of mutually incompatible moral propositions, such as "Killing people is bad" and "Killing people is good", and you could attach numbers to each proposition, like 0.2 and 0.3. The question is whether you can legitimately call these propositions "possible moral worlds", and the numbers "probabilities". I don't necessarily answer in the negative, but it's not a trivial problem.

Incompatible moral propositions don't directly correspond to different *worlds* because, no matter how you arrange the atoms in a solar system, that's not going to make it okay to kill people. Try to imagine a world that is exactly like this world - the same people, the same brain states, all the atoms in the same place, all fixed computations have the same outputs, all existing philosophers including you have written the same words in their essays - only in that world, killing people is good, instead of evil. If you call moral propositions worlds, then how do you tell which world you're in?

Similarly, probabilities themselves are not direct, ontological attributes of observed reality. If we don't know whether our shoelaces are tied, that is a fact about our state of mind, not a fact about the shoelaces. (Jaynes named this the Mind Projection Fallacy.) So, just because you feel a state of uncertainty about what "probability" you ought to assign to something, doesn't mean that "probability" is a variable-in-the-world. You could, nontrivially, state an ideal computation that you ought to be doing with the evidence you have, and then wonder about the unknown result of this fixed computation. For example, you might guess there's a 10% probability that this ideal computation would output an answer between "90%" and "95%" that your shoelaces are tied. But, in real life, either this computation outputs "92%" or it doesn't. And even if it is a definite fact that the computation outputs "92%", that doesn't mean there *really is* a 92% chance that your shoelaces are tied. Either they're tied or they're not. The actual world is not one in which the shoelaces have a little number 0.92 woven into the threads. The actual world is one in which a certain fixed computation has the definite output of 0.92, and your shoelaces are actually tied.
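
Here is a toy rendering of that distinction, assuming an invented stand-in for the "ideal computation"; the function and the credences are placeholders, not anything I'd defend:

```python
# Toy illustration: a fixed computation has one definite output, but a
# bounded reasoner who hasn't run it yet can spread credence over what
# that output will turn out to be. Everything here is a placeholder.

def ideal_computation(evidence: dict) -> float:
    """Stand-in for the ideal processing of one's evidence about the
    shoelaces. Deterministic: same evidence in, same number out."""
    return 0.92 if evidence.get("tied_them_this_morning") else 0.35

# Before running it, I might hold credences over its unknown output:
credence_over_output = {
    "below 0.90":   0.80,
    "0.90 to 0.95": 0.10,  # the '10% probability' mentioned above
    "above 0.95":   0.10,
}

# But the world contains only the definite output of the computation...
output = ideal_computation({"tied_them_this_morning": True})  # 0.92
# ...and the plain fact about the shoelaces themselves.
shoelaces_actually_tied = True
```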

If you want to put probability distributions over probabilities, you've got to do some additional work to justify putting a probability distribution on something that isn't part of the world. You might, for example, introduce a fixed computation whose unknown answer you're putting a probability distribution over. But it is not permissible to just make the jump directly, based on the fact that you *feel unsure* about what probability you ought to assign. You can't trust your brain - in this case, as in so many others. Your brain has a tendency to collapse its own perceptual states into the objects that they refer to. We say, "The coin has a 50% probability of coming up heads," not "I assign a 50% probability to the coin coming up heads". Evolution saving a few clock ticks again.

Likewise it takes some extra work to build a framework for moral possible worlds. By treating moral propositions as possible worlds, you are essentially *formalizing* that case of the Mind Projection Fallacy, which may not be a wise course of action. It may lead you to forget that not all minds are in the same frame of reference that you happen to use, the way that all minds you meet will be in the same real world as you. If you try to treat with an entity that doesn't share your resolution procedure - Andromedan babyeaters, or a paperclip maximizer - then there's no single variable for the two of you to both be uncertain about.


Paul and Eliezer, what exactly do you think goes wrong in describing moral uncertainty in terms of moral possibilities and a probability distribution over them? You must admit the formalism works just fine; it doesn't care. Why do you care? Nick S. similarly talked about probabilities only applying to "truth tracking" situations. You guys seem to think the problem is obvious, but I don't see it. Can you help?


I'm with Robin on this one. On the action point, you might want to check out Ted Lockhart's book "Moral Uncertainty and its Consequences" which grapples with precisely the issue of how to make decisions when we are uncertain of the truth of competing moral claims - a matter which, incidentally, deserves a lot more attention than it gets.


Paul, I share your worry about the problem; but I would not declare it unsolvable. In my view, it makes sense to construct probability distributions over moral judgments wherever we can uncover a *dependency* of that moral judgment on a question of fact.

Or in simpler language, "Relinquish the emotion which rests upon a mistaken belief, and seek to feel fully that emotion which fits the facts. If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm." This view lets us talk sensibly about probability distributions over "correct" emotions - by contagion of the probability distributions we have over correct facts. Since moral judgments, in my view, legitimately derive partially from emotions, by contagion of probability we have probabilities over moral judgments.

And of course many moral judgments depend more directly upon facts - for example, your $FavoritePoliticalSystem is probably supported, in your view, by your beliefs about the consequences of implementing $FavoritePoliticalSystem.
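
As a minimal sketch of that contagion, with a made-up policy and made-up numbers:

```python
# Minimal sketch of 'contagion of probability': when a moral judgment
# is conditional on a question of fact, uncertainty about the fact
# carries straight over to the judgment. All numbers are invented.

# Factual uncertainty: would implementing PolicyX actually raise welfare?
p_raises_welfare = 0.7

# Suppose my judgment "PolicyX is right" simply tracks that fact:
p_right_given_raises = 1.0
p_right_given_lowers = 0.0

p_policy_is_right = (p_right_given_raises * p_raises_welfare
                     + p_right_given_lowers * (1 - p_raises_welfare))
print(p_policy_is_right)  # 0.7, inherited from the factual credence
```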

For bounded rationalists, questions of "fact" can include questions about the unknown results of known computations, as well as questions about environmental variables. So we can also talk about probability distributions over the limit of a logic perfectly applied - but it does have to be the same logic in all cases.

But I do agree with Paul to the extent of holding that the probabilistic status of a moral claim has to be, as it were, discovered - you can't take it for granted. You can't just apply Aumann's Agreement Theorem to a disagreement between two parties who "feel strongly that they're both right", just because the human brain happens to reuse the same subjective sensation of "rightness" for probability judgments and political arguments. You've got to show that the disagreement, or some important facet of the disagreement, could not persist if both parties possessed perfectly veridical views of the universe. (This is straightforward enough for most political arguments, and quite a few moral ones.)

It is worthwhile to distinguish between moral disagreements between humans, and moral disagreements between humans and aliens. It is reasonable to suppose shared human premises by reason of shared neural architecture. Putting a probability distribution over the differing goals of humans and Andromedan babyeaters would be much more problematic.


My understanding of the interpretation of probabilities in cases like this is as follows. Ideally, if I judge something to have a 35% probability, whether it is the morality of lying to a murderer or something prosaic like the chance that it will rain tomorrow, I mean that I expect to be right 35% of the time that I have that strong a feeling.
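
A crude way to check whether that reading is doing any work, with an invented track record:

```python
# Crude calibration check for the reading above: judgments stated at
# 35% should come out true roughly 35% of the time they are made.
# The track record below is invented for illustration.

track_record = [
    # (stated probability, whether the claim turned out true)
    (0.35, False), (0.35, True), (0.35, False), (0.35, False), (0.35, True),
    (0.35, False), (0.35, False), (0.35, True), (0.35, False), (0.35, False),
]

hit_rate = sum(outcome for _, outcome in track_record) / len(track_record)
print(hit_rate)  # 0.3 -- tolerably close to the stated 0.35
```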


The only ethical theory that makes a lick of sense to me is emotivism, though I've never heard anything from its proponents (although apparently Leland Yeager used to be one). The subjective-objective or normative-positive distinction just seems like too huge a gap. So what's a good defense of this ethics & philosophy business that takes emotivism behind the shed and gives its hide a good tanning?


Eli's concept of (individual) extrapolated volition would be one way to get a useful probability estimate. "If my IQ were bumped up 50 points, my self-serving bias were damped down, I spent 10 years thinking about the problem, etc., what would my moral opinion be?" Of course, this requires that you have certain meta-ethical beliefs and that your process of moral decision making not be a chaotic system, but it does let you translate a moral dilemma to a physical problem.

An individual uncertain about the moral status of embryos (or insects, or fictional characters) can try to maximize the expected satisfaction of their ideal self with the choice - e.g., encourage other forms of birth control and permit abortion where bodily integrity interests are strong.
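
A rough sketch of what that expected-satisfaction calculation could look like; the credence and the approval scores are placeholders, not positions I'm defending:

```python
# Rough sketch: pick the option that maximizes the expected approval of
# one's idealized (smarter, less biased, better-informed) self, given
# uncertainty about the embryo's moral status. All numbers are placeholders.

p_embryo_has_status = 0.3  # credence the idealized self would grant moral status

# Hypothetical approval scores (0-1) under each moral possibility:
options = {
    "permit abortion, push other birth control": {"status": 0.6, "no_status": 0.9},
    "prohibit abortion entirely":                {"status": 0.9, "no_status": 0.3},
}

def expected_approval(scores):
    return (p_embryo_has_status * scores["status"]
            + (1 - p_embryo_has_status) * scores["no_status"])

best = max(options, key=lambda name: expected_approval(options[name]))
print(best, round(expected_approval(options[best]), 2))  # first option, 0.81
```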

Personally, I'm not certain what fidelity of simulation would be required to make VR entities' 'suffering' morally relevant, but I will avoid creating medium-fi Hells, even if this would be amusing to people who I am quite certain have moral standing.


Paul, I submit that the only issue here is whether or not we can imagine "moral possibilities." Once you let me describe such possibilities, then you cannot escape letting me form probability distributions over them. And in fact I don't see any other way for you to express the idea that you have moral uncertainty.
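
At its barest, the formalism is no more than this (using the lying-to-a-murderer case mentioned above, with placeholder numbers whose justification is exactly what's in dispute):

```python
# The bare formalism: mutually exclusive moral possibilities with
# credences summing to one. Whether such numbers can be given any
# non-arbitrary basis is the question at issue; these are placeholders.

moral_possibilities = {
    "lying to the murderer at the door is permissible":   0.55,
    "lying to the murderer at the door is impermissible": 0.45,
}

assert abs(sum(moral_possibilities.values()) - 1.0) < 1e-9
```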

I'll ask you, as I asked Nick S.: what do you think of the idea of "impossible possible worlds" as a way to represent reasoning about logical truths? If you'll grant those, it seems a short step to moral possibilities.

Nick B. was thinking about moral possibilities last summer; perhaps he'll weigh in.
