23 Comments

Great post, truly!


I'm a full-blown moral skeptic, so perhaps not the right audience, but I still have some questions. Assuming that there is such a thing as moral error and intuitions that give evidence about "true morality", it doesn't necessarily seem such a good idea to rely exclusively on one. Analogize our differing intuitions within our heads to different individuals: more precisely, experts as depicted by Tetlock. These experts are unreliable but the best we have. We think all of them are prone to error and overconfident in themselves. Wouldn't trying to pick "the best" expert and listening exclusively to him/her be a mistake? How can we trust our own ability to determine which expert is best? Shouldn't "the wisdom of crowds" help with the random errors that come from listening to only a single expert? If I recall correctly, phone-a-friend gives worse results than asking the audience on Who Wants to Be a Millionaire.

Much of humanity for much of its history has been illiberal. You already reject the moral authority of most of human history.


I apologize for the long and possibly confusing post. My question can be summarized more concisely:

Do the consequentialist frameworks being discussed here take into account that preferences change, and that happiness and unhappiness change them?


Jess, no argument here, though I think if you asked the average guy on the street, "Is racism bad for humanity as a whole?", they'd say yes. To me this raises the question: why aren't there more consequentialist ethics? They only seem to be successful on intimate levels, where each actor knows the others very well.

The critiques of Robin's ethics (by Will in the last post and by Caplan) seem almost silly to me. A consequentialist ethic needs to optimize utility not only in the present but in the future as well. Humans learn from carrots and sticks, and so some people must be made unhappy in the present so that more people can be made happy in the future. Racist memes are bad for society, so racists need to be carrotted and sticked into becoming less racist.

Caplan's Nazi critique of Robin's ethics (during their debate) seemed silly for this reason. We wouldn't want to allow a billion Nazis to murder a small number of Jews even if this was a net increase in utility, because racist-nationalist memes are utility-decreasing in the long run. The same thing goes for Will's example of a black family moving into a neighborhood of racists: we don't care because we know racism needs to go the way of the dodo bird. It seems to me that most of our natural moral intuitions already take time and memes into account, at least to some extent.

I think Watchmen said it best: "Nothing ever ends."

I guess I'm going to read Robin's paper (which I skipped because I already believe health isn't special).


Will, I don't think inconsistent actions are that big a problem for preference utilitarians; we can infer preferences from noisy actions. Inferring true preferences from noisy actions is much like inferring true morality from noisy intuitions; the larger the noise one expects, the simpler the model one should infer.
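(A toy sketch of that statistical point, with a setup entirely of my own invention rather than anything from Robin's paper: as the noise in the data grows, out-of-sample validation favors simpler models.)

```python
# Illustrative only: with noisier observations of the same underlying
# "true" function, cross-validation tends to select a simpler model.
import numpy as np

rng = np.random.default_rng(0)

def cv_best_degree(noise_sd, n=60, max_degree=8, folds=5):
    """Return the polynomial degree with the lowest cross-validated error."""
    x = np.linspace(-1, 1, n)
    y = np.sin(2 * x) + rng.normal(0, noise_sd, n)  # "true preferences" + noisy actions
    idx = rng.permutation(n)
    cv_errors = []
    for degree in range(1, max_degree + 1):
        fold_errors = []
        for k in range(folds):
            test = idx[k::folds]                      # held-out indices for this fold
            train = np.setdiff1d(idx, test)           # remaining indices for fitting
            coef = np.polyfit(x[train], y[train], degree)
            pred = np.polyval(coef, x[test])
            fold_errors.append(np.mean((pred - y[test]) ** 2))
        cv_errors.append(np.mean(fold_errors))
    return 1 + int(np.argmin(cv_errors))

for sd in (0.05, 0.3, 1.0):
    print(f"noise sd {sd}: cross-validation picks degree {cv_best_degree(sd)}")
```

The same logic carries over to the analogy: the noisier we take our actions or intuitions to be, the fewer free parameters the inferred model of "true preferences" or "true morality" should have.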

In saying "all else equal" I didn't mean to say anything about who objected to the preferences, nor did I mean to disallow others from evaluating those preferences. I don't understand how you can claim that it "doesn't speak to what we want or why." People can want others to want things, and that all counts in getting people what they want. The main rule is to get folks what they want, no matter what that is.


"Will, if you evaluate moral sentiments in terms of coordinating social behavior, you need some other way to evaluate what is a good vs. bad coordination outcome. I honestly don’t know a single moral intuition we have that is stronger than “all else equal, it is better for each person to get more of what they want.”

I agree that you need some way of determining the desirability of the coordination outcome. But what I was saying is that I don't think there is a truly authoritative standard that is not endogenous to the system of norms that produces the coordination outcome. If you simply posit something like preference utilitarianism, you'll find that it is not in fact acceptable from within the prevailing morality. People as they are won't find themselves with adequate reason to endorse it. What you'll have is a semi-arbitrary, intuition-based technique for suggesting radical revisions to the norms and beliefs that generate the coordination outcome. This is nice, because it feels like an Archimedean lever. But it isn't. If you're worried that people have inconsistent moral intuitions, then you need to worry that they have inconsistent preferences, too, which obviously wreaks a kind of havoc on preference utilitarianism. (And if people don't have consistent preferences, it's probably because they don't want to, and they should get what they want, right?)
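(To make the havoc concrete, in standard decision-theory notation that Will doesn't spell out himself: a cyclic preference admits no utility representation at all.)

```latex
% If preferences cycle, no utility function can represent them,
% since a representation would require a contradiction:
A \succ B,\ B \succ C,\ C \succ A
  \;\Longrightarrow\; u(A) > u(B) > u(C) > u(A)
```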

As to "all else equal, it is better for each person to get more of what they want," it really matters how you specify what it means for all else to be equal. You can't mean "insofar as people have unobjectionable preferences," because that's cheating. And I think it would strike most people as just bizarre to take preferences out of the realm of evaluation. Much of humanity for much of its history has thought that human nature is base, that our wants are despicably animal, and that we should only get more of what we want as long as what we want is sufficiently elevated or in accordance with divine or "natural" law.

Again, one of the reasons preference utilitarianism is so objectionable is that it doesn't speak to what we want or why. It's both too permissive and too conservative. That people with evil desires shouldn't satisfy them is more intuitively plausible than the idea that, other things equal, people should get more of what they want. And patterns of preference are themselves an outcome of social and economic structure and process. To endorse people getting more of whatever they happen to want is one way of being complacently mute about the determinants of preferences just when we'd like to be able to criticize precisely those things.

Anyway, I like it that we're arguing because I agreed with you about Miller and Frank, but in the wrong way!


Also, I think we meant two different things by 'accuracy'. In your paper, you mean 'tracking the truth'. I meant 'tracking our intuitions'. We want a theory to capture our intuitions, ceteris paribus, so in that sense accuracy is a theoretical virtue.


I saw it before but I'm going to read through it again. I looked through the most abstract metaethical discussions at the beginning and towards the end. Let me ask you about one passage:

"Specifically, we can examine our evolved health-care intuitions with respect to the two common indicators of intuition-error that I discussed in Section III: excessive historical contingency of origin and hidden bias toward one’s self or one’s in-group."

Ok, this seems right to me, but why does it seem right to you? Is it that these intuitions fail to cohere with our intuition that ethical truths are universal? Or with our intuition that the best indicator of moral truth is the judgments we make when taking an impartial perspective?

If so, then I can make sense of your error-detection method in terms of weight and coherence. We have weighty intuitions about impartiality (which itself is neutral between utilitarianism and deontology) and the universality of moral judgments (which cuts across most moral theories) and we want to choose moral principles in harmony with these weighty intuitions. So then why think that simplicity by itself is the best guide to accuracy? Why not think that the best guide to accuracy is a combination of different theoretical virtues? In fact, isn't that the view you really take in your paper? (Despite my not having carefully reread it!)

So, you are more modest in the paper: "This is not, however, the only possible response. If some but not all of our moral intuitions come under a cloud of suspicion, we can simply rely more heavily on our other intuitions. In other words, if moral intuitions taken from contexts outside this cloud of suspicion are presumed to have smaller errors, then we can seek moral principles that primarily fit our data in those contexts and apply those principles to health care."

You go on to say: "This might, but need not, tip the balance of reflective equilibrium so much that we adopt very simple and general moral principles, such as utilitarianism. This might not be appealing, but if we really distrust some broad set of our moral intuitions, this may be the best that we can do."

So you suggest that we weight simplicity more highly than many intuitionists do. That's cool. But we can still assign weight to other metrics. And there are also other important deep, abstract, and simple intuitions, like the 'separateness of persons' intuition. That seems like a pretty darn universal, weighty (and simple) intuition to me. It is also, I think, incompatible with utilitarianism, as is the Bernard Williams integrity intuition that morality cannot demand as much of us as utilitarianism does.


Kevin, as I explained in my paper, the larger the errors in our intuitions, the more one must go for simplicity in order to achieve accuracy.
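(The statistical analogue of Robin's claim, offered as my gloss rather than a quote from his paper, is the standard bias-variance decomposition: expected squared error splits into three terms.)

```latex
% Expected squared error of an estimate \hat{f} of f, with noise variance \sigma^2:
\mathbb{E}\big[(\hat{f}(x) - y)^2\big]
  = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathrm{Var}\big[\hat{f}(x)\big]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{noise}}
```

A theory tuned to fit many intuitions closely has low bias but high variance; when the noise term is large, variance dominates, so a simpler theory can have lower total error.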


But why is the fact that it applies to only a tiny fraction of situations relevant? Why not accept a large number of heavily weighted intuitions rather than a single, abstract one that many reject? It will all depend on what you prize most in a theory: simplicity or accuracy (or even something else).


Kevin, the "don't kill" intuition only applies to a tiny fraction of situations; if you wanted a framework that always said what to do, you'd need an enormous number of intuitions that specific. In contrast, giving people what they want applies to pretty much any situation, so you can always get an answer from that one intuition alone.


In other words, there are many theoretical virtues of a moral theory, and different people will weight them differently. David Schmidtz thinks capturing particular intuitions is so important that he is willing to give up on the idea of a monistic moral theory altogether. What's wrong with his weighting of the meta-moral-theoretical criteria? It seems to me that you've got a bias against the other metrics, prizing reliance on the fewest intuitions over all other considerations.


Robin, why is the number of intuitions your only metric for choosing which intuitions to base a moral theory on? Here are some other metrics: weight, coherence with other intuitions, costs of revision.

If you go with weight, then lots of particular intuitions seem to come into play much more so than abstract intuitions about principles: 'don't kill' or 'don't rape' gets more weight than many abstract intuitions about moral theories.

If you go with coherence, then your view runs into trouble because it conflicts with lots of other intuitions.

And consider costs of revision: your view requires us to overturn vast swaths of other intuitions that we hold, whereas I find it basically costless to my intuitions to reject your view. You certainly still possess the intuitions that you claim not to trust, so why shouldn't having to abandon them count as a theoretical cost?


Will, if you evaluate moral sentiments in terms of coordinating social behavior, you need some other way to evaluate what is a good vs. bad coordination outcome. I honestly don't know a single moral intuition we have that is stronger than "all else equal, it is better for each person to get more of what they want." If one relies only on this intuition, one gets preference utilitarianism. All other moral frameworks I know of seem to default to relying on this intuition when other considerations are not relevant. So each of them relies on this intuition, plus other intuitions. If so, the framework that relies on the fewest intuitions is preference utilitarianism. If you know of some comparably minimal framework, I'm all ears.


Robin,

I think we can clear this debate up a bit if we distinguish between two understandings of utility - a formal notion and a substantive notion. Take the 'formal' notion to be broadly Misesian, or merely as a formal representation of preferences. Take the 'substantive' notion to be some positive account of well-being - say, for instance, an informed-desire account, or a hedonistic account (with a specified conception of hedons) or a perfectionist account. I imagine you incline towards some sort of view between informed-desire and hedonism.
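(To make the 'formal' notion concrete, here is the standard textbook definition from decision theory, offered as my gloss and not anything Robin has committed to: a utility function merely represents a preference ordering.)

```latex
% Utility as a purely formal representation of a preference ordering:
u \ \text{represents} \ \succeq
  \quad\iff\quad
  \forall x, y:\ x \succeq y \ \Leftrightarrow\ u(x) \ge u(y)
```

Since any order-preserving transform of u represents the same ordering, the formal notion makes no substantive claim about well-being at all.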

Once we make this distinction, I think we can see why you aren't really defending a view that is more 'acceptable' in any sense of the word. If you are advocating the 'formal' preference utilitarianism, then your view is compatible with literally any moral theory. In A Theory of Justice, Rawls notes that if you read the principle of utility formally enough, then his theory counts as a version of utilitarianism. I think economists often implicitly work with this formal standard. I think sometimes you're working with this view as well.

But take a guy like me - I'm a deontologist - I think sometimes we have reasons to act that are not tied to promoting well-being, or to promoting anything for that matter. However, I could count as a utilitarian on this view. Call me a 'reasons-Paretian': I favor promoting those states of affairs in which more of us act on our deontic reasons rather than not.

So I don't think you mean to defend the 'formal' theory. However, once you embrace the 'substantive' conception of utility, it should become immediately clear that your utilitarianism is going to be no more 'acceptable' than any other moral theory, not to mention other versions of utilitarianism. Why is your version of utilitarianism more acceptable than, say, G.E. Moore's ideal utilitarianism?


Robin, I know all the work on the unreliability of intuitions. And I'm incredibly skeptical of the standard intuitionist methods of moral philosophy as well. But I'm equally skeptical of arguments like Josh Greene's that try to infer utilitarianism from the incoherence of our intuitions. It's a total non sequitur. Lots of moral philosophers who trust intuition less tend toward Humean thinking about how contingent moral sentiments (intuitions) do or do not succeed in coordinating social behavior. It would be nice to have an exogenous criterion for determining whether a scheme of coordination was a good one. And you can go ahead and say you're gonna go with preference utilitarianism for this purpose. But to my mind that has all the virtue of theft over honest toil.

If you've never read Sidgwick's The Methods of Ethics, I recommend it. He's as clear as anyone has ever been that there is no non-intuitionist basis for the principle of utility. You might think that a single fundamental intuition is better than lots of less fundamental ones, especially if it lends structure and rigor to moral reasoning. But if the foundational intuition is wrong, then all the inferences based on it are wrong or only accidentally right.
