Reply to Wilkinson

Responding to Will’s comments, I wrote:

Will Wilkinson seems to me a bit too quick here to assume the activities he likes are less deserving of taxes. …  If we are to tax positional or unhappy activities, then let’s do that consistently, following our best data on positionality or happiness.

Will replied:

First, I think Robin may have missed one of my key points, which is that “negative externality” is not a synonym for “harm” in the relevant sense of the word. It begs the question to just go ahead and talk about various harms as if I had not just argued that they don’t all count as harms just because someone is bothered by each of them. …

There is no clear theoretical basis for selecting a single, clear theoretical basis for determining what does and does not count as a harm. Indeed, no one is rationally bound to accept the normative assumptions underlying the case for economic competition – the clear theoretical basis for “harm” Robin is willing to accept. …

Moral diversity and disagreement are ineradicable. … I think Robin complains that I share Miller’s and Frank’s reliance on intuitions about things we happen to dislike because I’m arguing with them from within what I see to be their prior liberal moral commitments, which I share. We’re all liberals, which means we dislike many of the same things.

Will is such a pleasure to converse with that I didn’t notice how differently we use words.  Like most economists, I do count anything that bothers anyone as a “harm,” and anything that benefits anyone as a “good.”  (The same act can be both.)  To decide which acts should be taxed or subsidized, I use the usual economists’ efficiency criteria to rank policies.  Call me morally naive, but this seems a good guide to me.
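For concreteness, here is a rough sketch of the two standard criteria, in notation I am introducing here (textbook formulations vary): let u_i be agent i’s utility from a policy and v_i be agent i’s net willingness to pay for it.

\[
\text{Pareto: } x \text{ beats } y \iff u_i(x) \ge u_i(y) \text{ for all } i, \text{ with } u_j(x) > u_j(y) \text{ for some } j.
\]
\[
\text{Kaldor--Hicks: } x \text{ beats } y \iff \sum_i v_i(x) > \sum_i v_i(y).
\]

Under Kaldor-Hicks the winners could in principle compensate the losers, which is why it functions as something close to preference utilitarianism denominated in money.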

Given these choices it becomes a matter of fact whether taxing any given activity increases or decreases efficiency, and disagreement should be eradicable.  In the absence of substantial market failures it is clear that ordinary competition is favored.  What I meant when asking for “a clear principle we are willing to apply consistently” is a way to see through the mass of detail to discern the efficient policies in the other subtler cases.

I get that you can offer quicker, stronger arguments to your fellow liberals by referring to your shared assumptions with them.  But I seek more widely acceptable arguments.

  • http://willwilkinson.net/flybottle Will Wilkinson

    “I get that you can offer quicker, stronger arguments to your fellow liberals by referring to your shared assumptions with them. But I seek more widely acceptable arguments.”

    Do you mean acceptable or accepted? If the latter, I don’t think that the standard economic conception of efficiency is really more widely accepted than the set of liberal values I mentioned, which are pretty broadly shared, especially among academics.

    If you mean acceptable, then I ask according to what standard? The problems with standard efficiency criteria are well known. Pareto is basically without meaningful content when you allow interdependent preferences and dynamic endogenous preference formation. And Kaldor-Hicks is just utilitarianism, more or less. I know you don’t want to wrestle in the mud with moral philosophers, but you can’t keep up your very normatively focused enterprise by simply gesturing to “the usual economists’ criteria to rank policies” when basically all experts on this stuff reject the standard efficiency criteria.

    I understand you’ve been trained and socialized as an economist, but that’s not good enough, is it? If you want your complaints about others leaning too hard on their intuitions to have bite, you need to assure us that you’re not just leaning on other intuitions you happen to like better.

  • http://hanson.gmu.edu Robin Hanson

    Will, I disagree that “basically all experts on this stuff reject the standard efficiency criteria.” I know that many moral philosophers reject it, but mainly because they can think of cases where it conflicts with their moral intuitions. I think they trust such intuitions much too strongly, however. Yes, the version of efficiency I favor is close to preference utilitarianism, and many moral philosophers who trust specific moral intuitions less favor this. My one philosophy publication is exactly about how trusting specific intuitions less leads to simpler accounts like preference utilitarianism.

  • Kenny Evitt

    So, if liberal academics disregard efficiency, then accepted/acceptable counter-arguments to their prescriptions for preventing specific harms must be of the form “… but you’ll violate the pluralism/liberalism/… of society!”.
    The basis of their (recent) arguments is that specific goods are positional due to intense signaling and that positional goods make people unhappy [and are therefore inefficient uses of limited resources!]. They’re just begging for a good “well, if you want to talk about efficiency …” argument to remind them that the general heuristic guiding their intuitions will consistently deprive them of … a society that is “tolerant of dissent, peaceful cultural conflict, and social change”.

  • loqi

    Speaking as someone who’s pretty sure his moral intuitions do not align with Hanson’s, I’m completely unmoved by his opponents on this issue (Caplan, and now Wilkinson). I want economists and policy analysts making as few hidden background assumptions as possible, especially when morality is involved.

  • Grant

    I’m surprised no one has mentioned the benefits of unhappiness or disutility. People feel pain for good reasons. Sometimes pain helps them avoid more pain (or gain more happiness) in the future. Sometimes it helps them reproduce. Both pain and happiness shape our memes, though I admit it would be nice to have the latter without the former.

    For example, the unhappiness felt by racists who have a black man move into their neighborhood is disregarded by the vast majority of people, who believe racist memes are bad for society as a whole. Mightn’t the racists’ disutility be necessary to break down those memes? If we pay them for being racist, will that reduce the number of racists? (In some ways I think this is an anti-Coasian argument.)

    While I generally agree with Robin, I’m very skeptical that sociologists have enough understanding of what makes our society tick to trust them with the power to tax anything as nebulous as “unhappiness”.

  • Jess Riedel

    …the vast majority of people…believe racist memes are bad for society as a whole.

    Grant, a nitpick: the claim that the wrongness of racism derives from the fact that it is bad for society is decidedly consequentialist. The vast majority of people behave roughly according to a deontological morality, at least on the shallow, operational level at which most people address moral issues.

  • Zac Gochenour

    The paper Robin links to above is one from his CV that I had skipped. Neither the title nor the publication outlet really screamed “must read” to me. You may be tempted to skip it too, especially if you are skeptical of authors who cite or link to their own papers, or of philosophy written by economists.

    I read the paper last night, and I’m paying for it today since I barely slept. Yes, it is that good: genuine viewquake material.

  • http://willwilkinson.net/flybottle Will Wilkinson

    Robin, I know all the work on the unreliability of intuition. And I’m incredibly skeptical of the standard intuitionist methods of moral philosophy as well. But I’m equally skeptical of arguments like Josh Greene’s that try to infer utilitarianism from the incoherence of our intuitions. It’s a total non-sequitur. Lots of moral philosophers who trust intuition less tend toward Humean thinking about how contingent moral sentiments (intuitions) do or do not succeed in coordinating social behavior. It would be nice to have an exogenous criterion for determining whether a scheme of coordination was a good one. And you can go ahead and say you’re gonna go with preference utilitarianism for this purpose. But to my mind that has all the virtue of theft over honest toil.

    If you’ve never read Sidgwick’s The Methods of Ethics, I recommend it. He’s as clear as anyone has ever been that there is no non-intuitionist basis for the principle of utility. You might think that a single fundamental intuition is better than lots of less fundamental ones, especially if it lends structure and rigor to moral reasoning. But if the foundational intuition is wrong, then all the inferences based on it are wrong or only accidentally right.

  • Kevin

    Robin,

    I think we can clear this debate up a bit if we distinguish between two understandings of utility – a formal notion and a substantive notion. Take the ‘formal’ notion to be broadly Misesian, or merely a formal representation of preferences. Take the ‘substantive’ notion to be some positive account of well-being – say, for instance, an informed-desire account, or a hedonistic account (with a specified conception of hedons) or a perfectionist account. I imagine you incline towards some sort of view between informed-desire and hedonism.

    Once we make this distinction, I think we can see why you aren’t really defending a view that is more ‘acceptable’ in any sense of the word. If you are advocating the ‘formal’ preference utilitarianism, then your view is compatible with literally any moral theory. In A Theory of Justice, Rawls notes that if you read the principle of utility formally enough, then his theory counts as a version of utilitarianism. I think economists often implicitly work with this formal standard. I think sometimes you’re working with this view as well.

    But take a guy like me – I’m a deontologist – I think sometimes we have reasons to act that are not tied to promoting well-being or promoting anything for that matter. However, I could be a utilitarian on this view. Call me a ‘reasons-Paretian’ – I favor promoting those states of affairs where we have more acting on our deontic reasons than not.

    So I don’t think you mean to defend the ‘formal’ theory. However, once you embrace the ‘substantive’ conception of utility, it should become immediately clear that your utilitarianism is going to be no more ‘acceptable’ than any other moral theory, or even than other versions of utilitarianism. Why is your version of utilitarianism more acceptable than, say, G.E. Moore’s ideal utilitarianism?

  • http://hanson.gmu.edu Robin Hanson

    Will, if you evaluate moral sentiments in terms of coordinating social behavior, you need some other way to evaluate what is a good vs. bad coordination outcome. I honestly don’t know a single moral intuition we have that is stronger than “all else equal, it is better for each person to get more of what they want.” If one relies only on this intuition, one gets preference utilitarianism. All other moral frameworks I know of seem to default to relying on this intuition when other considerations are not relevant. So each of them relies on this intuition, plus other intuitions. If so, the framework that relies on the fewest intuitions is preference utilitarianism. If you know of some comparably minimal framework, I’m all ears.

  • Kevin

    Robin, why is their sheer number your only metric for choosing amongst the intuitions upon which to base a moral theory? Here are some other metrics: weight, coherence with other intuitions, costs of revision.

    If you go with weight, then lots of particular intuitions seem to come into play much more so than abstract intuitions about principles: ‘don’t kill’ or ‘don’t rape’ gets more weight than many abstract intuitions about moral theories.

    If you go with coherence, then your view runs into trouble because it conflicts with lots of other intuitions.

    And consider costs of revision – your view requires us to overturn vast swaths of other intuitions that we hold, whereas I find it basically costless to my intuitions to reject your view. Certainly you still possess the intuitions that you claim not to trust, so why not count having to abandon them as a theoretical cost?

  • Kevin

    In other words, there are many theoretical virtues of a moral theory and different people will weight them differently. David Schmidtz thinks capturing the particular intuitions is so important that he is willing to give up on the idea of a monistic moral theory altogether. What’s wrong with his weighting of the meta-moral-theoretical criteria? It seems to me that you’ve got a bias against the other metrics, prizing reliance on the fewest intuitions over all other considerations.

  • http://hanson.gmu.edu Robin Hanson

    Kevin, the “don’t kill” intuition only applies to a tiny fraction of situations; if you wanted a framework that always said what to do, you’d need an enormous number of intuitions which were that specific. In contrast, giving people what they want applies to pretty much any situation, so with that one intuition you can always get an answer.

  • Kevin

    But why is the fact that it applies to only a tiny fraction of situations relevant? Why not accept a large number of heavily weighted intuitions rather than a single, abstract one that many reject? It will all depend on what you prize most in a theory: simplicity or accuracy (or even something else).

  • http://hanson.gmu.edu Robin Hanson

    Kevin, as I explained in my paper, the larger the errors in our intuitions, the more one must go for simplicity in order to achieve accuracy.
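    As a minimal statistical sketch of this claim (notation mine, not the paper’s): model a given intuition as a noisy signal d of an underlying moral quantity m, with a simple shared prior:

    \[
    d = m + \varepsilon, \quad \varepsilon \sim N(0, \sigma^2), \quad m \sim N(0, \tau^2)
    \;\Longrightarrow\;
    E[m \mid d] = \frac{\tau^2}{\tau^2 + \sigma^2}\, d.
    \]

    As the noise variance grows, the weight on any particular intuition shrinks toward zero, so the best estimate leans ever more on the simple structure shared across cases; that is one way to cash out “larger errors force more simplicity.”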

  • Kevin

    I saw it before but I’m going to read through it again. I looked through the most abstract metaethical discussions at the beginning and towards the end. Let me ask you about one passage:

    “Specifically, we can examine our evolved health-care intuitions with respect to the two common indicators of intuition-error that I discussed in Section III: excessive historical contingency of origin and hidden bias toward one’s self or one’s in-group.”

    Ok, this seems right to me, but why does it seem right to you? Is it that these intuitions fail to cohere with our intuition that ethical truths are universal? Or with our intuition that the best indicators of moral truth are the judgments we make when taking an impartial perspective?

    If so, then I can make sense of your error-detection method in terms of weight and coherence. We have weighty intuitions about impartiality (which itself is neutral between utilitarianism and deontology) and the universality of moral judgments (which cuts across most moral theories) and we want to choose moral principles in harmony with these weighty intuitions. So then why think that simplicity by itself is the best guide to accuracy? Why not think that the best guide to accuracy is a combination of different theoretical virtues? In fact, isn’t that the view you really take in your paper? (Despite my not having carefully reread it!)

    So, you are more modest in the paper: “This is not, however, the only possible response. If some but not all of our moral intuitions come under a cloud of suspicion, we can simply rely more heavily on our other intuitions. In other words, if moral intuitions taken from contexts outside this cloud of suspicion are presumed to have smaller errors, then we can seek moral principles that primarily fit our data in those contexts and apply those principles to health care.”

    You go on to say: “This might, but need not, tip the balance of reflective equilibrium so much that we adopt very simple and general moral principles, such as utilitarianism. This might not be appealing, but if we really distrust some broad set of our moral intuitions, this may be the best that we can do.”

    So you suggest that we weight simplicity more highly than many intuitionists do. That’s cool. But we can still assign weight to other metrics. And there are also other important deep, abstract, and simple intuitions, like the ‘separateness of persons’ intuition. That seems like a pretty darn universal, weighty (and simple) intuition to me. It is also, I think, incompatible with utilitarianism, as is the Bernard Williams integrity intuition that morality cannot demand as much of us as utilitarianism does.

  • Kevin

    Also, I think we meant two different things by ‘accuracy’. In your paper, you mean ‘tracking the truth’. I meant ‘tracking our intuitions’. We want a theory to capture our intuitions, ceteris paribus, so in that sense accuracy is a theoretical virtue.

  • http://willwilkinson.net/flybottle Will Wilkinson

    “Will, if you evaluate moral sentiments in terms of coordinating social behavior, you need some other way to evaluate what is a good vs. bad coordination outcome. I honestly don’t know a single moral intuition we have that is stronger than ‘all else equal, it is better for each person to get more of what they want.’”

    I agree that you need some way of determining the desirability of the coordination outcome. But what I was saying is that I don’t think there is a truly authoritative standard that is not endogenous to the system of norms that produces the coordination outcome. If you simply posit something like preference utilitarianism, you’ll find that it is not in fact acceptable from within the prevailing morality. People as they are won’t find themselves with adequate reason to endorse it. What you’ll have is a semi-arbitrary, intuition-based technique for suggesting radical revisions to the norms and beliefs that generate the coordination outcome. This is nice, because it feels like an Archimedean lever. But it isn’t. If you’re worried that people have inconsistent moral intuitions, then you need to worry that they have inconsistent preferences, too, which obviously wreaks a kind of havoc on preference utilitarianism. (And if people don’t have consistent preferences, it’s probably because they don’t want to, and they should get what they want, right?)

    As to “all else equal, it is better for each person to get more of what they want,” it really matters how you specify what it means for all else to be equal. You can’t mean “insofar as people have unobjectionable preferences,” because that’s cheating. And I think it would strike most people as just bizarre to take preferences out of the realm of evaluation. Much of humanity for much of its history has thought that human nature is base, that our wants are despicably animal, and that we should only get more of what we want as long as what we want is sufficiently elevated or in accordance with divine or “natural” law.

    Again, one of the reasons preference utilitarianism is so objectionable is that it doesn’t speak to what we want or why. It’s both too permissive and too conservative. That people with evil desires shouldn’t satisfy them is more intuitively plausible than the idea that, other things equal, people should get more of what they want. And patterns of preference are themselves an outcome of social and economic structure and process. To endorse people getting more of whatever they happen to want is one way of being complacently mute about the determinants of preferences when we’d like to be able to criticize precisely those things.

    Anyway, I like it that we’re arguing because I agreed with you about Miller and Frank, but in the wrong way!

  • http://hanson.gmu.edu Robin Hanson

    Will, I don’t think inconsistent actions are that big a problem for preference utilitarians; we can infer preferences from noisy actions. Inferring true preferences from noisy actions is much like inferring true morality from noisy intuitions; the larger the noise one expects, the simpler the model one should infer.

    In saying “all else equal” I didn’t mean to say anything about who objected to the preferences, nor did I mean to disallow others from evaluating those preferences. I don’t understand how you can claim that “it doesn’t speak to what we want or why.” People can want others to want things, and that all counts in getting people what they want. The main rule is to get folks what they want, no matter what that is.

  • Grant

    Jess, no argument here, though I think if you asked the average guy on the street “is racism bad for humanity as a whole?” they’d say “yes”. To me this raises the question: why aren’t there more consequentialist ethics? They only seem to be successful on intimate levels, where each actor knows the others very well.

    The critiques of Robin’s ethics (by Will in the last post and by Caplan) seem almost silly to me. A consequentialist ethic needs to optimize utility not only in the present but in the future as well. Humans learn from carrots and sticks, and so some people must be made unhappy in the present so that more people can be made happy in the future. Racist memes are bad for society, so racists need to be carrotted and sticked into becoming less racist.

    Caplan’s Nazi critique of Robin’s ethics (during their debate) seemed silly for this reason. We wouldn’t want to allow a billion Nazis to murder a small number of Jews even if this was a net increase in utility, because racist-nationalist memes are utility-decreasing in the long run. The same thing goes for Will’s example of a black family moving into a neighborhood of racists: we don’t care because we know racism needs to go the way of the dodo bird. It seems to me that most of our natural moral intuitions already take time and memes into account, at least to some extent.

    I think Watchmen said it best: “Nothing ever ends.”

    I guess I’m going to read Robin’s paper (which I skipped because I already believe health isn’t special).

  • Grant

    I apologize for the long and possibly confusing post. My question can be summarized more concisely:

    Do the consequentialist frameworks being discussed here take into account that preferences change, and that happiness and unhappiness change them?

  • http://entitledtoanopinion.wordpress.com TGGP

    I’m a full-blown moral skeptic, so perhaps not the right audience, but I still have some questions. Assuming that there is such a thing as moral error and intuitions that give evidence about “true morality”, it doesn’t necessarily seem such a good idea to rely exclusively on one. Analogize our differing intuitions within our heads to different individuals: more precisely, experts as depicted by Tetlock. These experts are unreliable but the best we have. We think all of them are prone to error and overconfident in themselves. Wouldn’t trying to pick “the best” expert and listening exclusively to him/her be a mistake? How can we trust our own ability to determine which expert is best? Shouldn’t “the wisdom of crowds” help cancel the random errors that come from listening to only a single expert? If I recall correctly, phone-a-friend gives worse results than asking the audience in Who Wants to Be a Millionaire.
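    The variance-reduction half of this is textbook, under the strong assumptions that the n experts’ errors are independent, unbiased, and of equal size (notation mine):

    \[
    \hat m = \frac{1}{n} \sum_{i=1}^{n} d_i, \qquad \operatorname{Var}(\hat m) = \frac{\sigma^2}{n}.
    \]

    Averaging n such experts cuts error variance by a factor of n; correlated errors or shared biases shrink the gain, which is the usual caveat to “ask the audience.”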

    Much of humanity for much of its history…

    …has been illiberal. You already reject the moral authority of most of human history.
