Morality Is Overrated

Hanging out with moral philosophers last week at Oxford reminded me of the old complaint that economists neglect morality.  Actually, I think the real problem is the reverse!  Let me explain.

Many people advise us on what to do.  Some discuss personal actions, while others suggest how groups could better coordinate.  And, crucially, some advise us on what we should do, while others advise us on how to get what we want.

At the personal level, parents, teachers, preachers, and activists tend to tell us what is morally right, while friends, mentors, lawyers, doctors, therapists, and financial planners tend to tell us what will achieve our ends.  At the level of social policy, pundits and wonks give a mixture of rationales for their suggestions.  Moral philosophers, for example, tend to emphasize policies we should pick, while economists tend to emphasize policies to better get us what we want.   

All else equal, we may each prefer to do what is right, but when all else is not equal we often allow other considerations to weigh against morality.  After all, morality is only one of the many ends we pursue.  Yes we want to be moral, but we also want other things, and we each choose as if we often care about those other things more than morality.  (Some say moral beliefs directly cause us to be moral even if we don’t want that, but I prefer to describe this as a revealed preference for moral ends, i.e., for "wanting" to be moral.) 

Economic analysis tries to infer what people want, largely from actions, and then tries to suggest policies to get people more of what they want.  (In particular, it suggests good deals – policy packages which should be better for most everyone.)  Yes people often make mistakes, are ignorant, and have conflicts with the wants of others, but economists have many reasonable fixes for such problems.  Critics, however, say economic analysis is untrustworthy because it is incomplete, since wants are only one of many moral considerations.  But this complaint seems to me backwards.

Yes, we "should" (morally) prefer to analyze policy in moral terms, and we should choose what moral analysis recommends.  For example, perhaps we should immediately and drastically cut CO2 emissions because we have no right to pollute natural purity, and we should care greatly about wildlife and distant future generations.  If so, economic analyzes advising only modest CO2 taxes are a moral travesty, reflecting an unconscionable neglect of "non-economic" considerations.  We should thus condemn these analyzes and the economists who support them. 

But in fact we care only moderately about what we "should" do.  We do not want immediate drastic CO2 cuts because we do not in fact care much about natural purity, wildlife, or distant generations, even if we should care more.  Economic analyses suggest modest CO2 taxes not because they ignore "non-economic" considerations – there are no such things – but because such analyses give morality only as much weight as people do.

What we humans want is policy that considers our wants overall, without giving excess weight to morality.  So we want policy advisors, like economists, who suggest actions that better get us what we want, even if those actions are immoral.  We do not want to just do what we should, but we instead want to achieve all our ends, including immoral and amoral ends.  So we mostly do not want to just do what moral philosophers suggest. 

Unfortunately, all this is clouded by our tendency to want to appear to care more about morality than we actually do.  We want to take the moral high ground and be seen as supporting highly moral policies, even if we don’t actually want those policies implemented.  So we publicly support moral policies when our support seems unlikely to change the outcome.  But it is amoral advisors, like economists, who help us the most. 

Bottom line: We want to get what we want, not just do what we should, and so we want advisors like economists who tell us how to get what we want.  But we’d rather be seen as following advisors like moral philosophers who tell us to do what we should.

Thanks to Nicholas Shackel for stimulating discussions on this. 

  • http://www.hopeanon.typepad.com Hopefully Anonymous

    A dreary and obvious point for your core readers, although pretty well-stated. I’d like more exploration of the politically incorrect policy recommendations it leads to, such as encouraging an emergency room doctor to engage in random murder if the enjoyment of it allows them to be more productive at saving lives.

  • http://www.ciphergoth.org/ Paul Crowley

    Hopefully Anonymous: if you want to show that a consequentialist morality leads to unpalatable decisions, you have to actually demonstrate that there would be some reason for a consequentialist to suppose that the decision would actually have the desirable consequences you set out and that these consequences would outweigh the obvious undesirable consequences. And — here is the tricky bit — you have to do that without your audience starting to think that maybe that’s not so unpalatable after all.

    In this instance I’m pretty unconvinced that the policy you propose would have good consequences, to say the least.

  • a. y. mous

    So, if we just want to want what we should want, does the problem go away or will it be merely explained away?

  • athmwiji

    Most economists seem to be moral, suggesting that people should respect each other’s property and compete fairly. I think very few economists would advise someone to get rich through conquest and oppression by force of violence.

  • http://transhumangoodness.blogspot.com Roko

    “We want to get what we want, not just do what we should, and so we want advisors like economists who tell us how to get what we want. But we’d rather be seen as following advisors like moral philosophers who tell us to do what we should.”

    – I think you’re under-rating the importance of ethics here, and also slightly misinterpreting what it actually is. You talk about ethics as if it’s just another preference that humans have, or even as if it’s a commodity, but I think that this is not the case. If people preferred to act morally, then we wouldn’t need to talk about “ethics” or “morality”.

    The point is that a lot of the time, people like to mess things up for other people, for example by killing or hurting them. Ethics is a mutually-enforced contract within a society which makes people behave in a way that they don’t actually want to. Criminals would love to be able to get away with robbing people’s houses – they have a preference for getting money without doing any work – but using our codes of ethics, and the enforcement instruments thereof, we stop them.

    Ethics almost always boils down to the need to resolve some “tragedy of the commons” scenario. In the case of the robbers, it would be better for the individual robber to rob people’s houses, but it’s worse for everyone if everyone does it.
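
    A minimal payoff sketch may make that structure concrete; the numbers below are invented purely for illustration:

```python
# Invented payoffs for the robbery dilemma above: robbing pays the
# individual whatever the other party does, yet universal robbing is
# worse for all than universal abstention.
PAYOFF = {  # (my_action, other_action) -> my payoff
    ("abstain", "abstain"): 3,  # property respected, normal work
    ("rob",     "abstain"): 5,  # free money, little reprisal
    ("abstain", "rob"):     0,  # I work, my house gets robbed
    ("rob",     "rob"):     1,  # everyone robs; little left to take
}

for mine in ("abstain", "rob"):
    for theirs in ("abstain", "rob"):
        print(f"I {mine}, they {theirs}: I get {PAYOFF[mine, theirs]}")

# "rob" strictly dominates for the individual (5 > 3 and 1 > 0), yet
# mutual robbing (1, 1) is worse than mutual abstention (3, 3): the
# structure that ethics-as-contract is meant to break.
```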

    Should we provide criminals with expert advisors who will help them to more effectively commit crimes? No. Because what they want is not for the common good. In fact, we should probably outlaw such advisors.

  • a. y. mous

    >> No. Because what they want is not for the common good.

    Roko, the common good is the correct choice, “by definition”. Right? Forgive me. You are right. That was a cheap rhetorical retort at the OP.

    I’ve been reading posts here at OB for a while now. Have begun commenting only recently. Over-arching and over-reaching reductionist analyses make me want to clarify then and there that it ain’t so.

  • http://barrkel.blogspot.com/ Barry Kelly

    I’m afraid you lost me a bit on this one. Economics is the study of choices, and of how individuals maximize utility and firms maximize profits through those choices. It doesn’t have a moral dimension at all.

    Even if an economic model which has a wide range of choices for the individual leads to outcomes that a moral philosopher might object to, that doesn’t amount to a moral consequence or conclusion of economics: it’s a consequence of, or the observed morality of, individual choices.

    To achieve a desired moral outcome, an economist might recommend limiting certain choices, adjusting incentives and otherwise tinkering with the rules of the game and the information transmitted by price, but that morality is imposed from without.

    In the case of environmentally harmful effects, it’s pretty basic economics that the costs to the group are not generally borne by the instigator of the effects, and that’s why the harmful effects continue. You can’t draw a conclusion that because the market resulted in harm to the environment, that represents a moral choice of a collective body of people, even if it was caused by the individuals making up the body. Instead, it comes down to basic game theory and the tragedy of the commons – which, incidentally, only occurs in large impersonal markets, not in the original villages and farming communities where people had a small enough circle to remember and punish infractions against the group.

    Thus, for the specific example of environmental harm, one needs an external value judgement to e.g. implement Pigouvian taxes to internalize these external moral costs into the pricing information.
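
    A toy numeric sketch of that internalization step; the price, cost curve, and external-cost figure below are invented for illustration:

```python
# A hypothetical producer whose private marginal cost omits a per-unit
# harm borne by third parties. Untaxed, it expands output past the
# socially efficient point; a Pigouvian tax equal to the marginal
# external cost realigns the private and social optima.
PRICE = 10          # market price per unit (invented)
EXTERNAL_COST = 4   # harm per unit borne by others (invented)

def private_marginal_cost(q):
    return 2 + 0.5 * q  # rising marginal cost, illustrative

def best_output(tax):
    # produce another unit while price covers marginal cost plus tax
    q = 0
    while PRICE - tax >= private_marginal_cost(q + 1):
        q += 1
    return q

print("untaxed output:", best_output(tax=0))
print("with Pigouvian tax:", best_output(tax=EXTERNAL_COST))
```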

  • http://hanson.gmu.edu Robin Hanson

    Hopefully and Paul, this is not an apology for consequential ethics – it is an apology for ignoring ethics and just looking for deals that get people what they want.

    a y, if you have conflicting wants, such as wanting to want something other than you want, we may prefer to observe your behavior to see which wants are stronger.

    Roko, moral philosophers will tell you morality is far more than just a way we coordinate to punish people who prevent us from getting what we want. People trying to get what they want can want to punish criminals without reference to morality.

    Barry, the issue is the size of the tax, not whether there should be a tax. The tax to correct harms to wants seems much smaller than the tax to avoid immoral outcomes.

  • http://barrkel.blogspot.com/ Barry Kelly

    Re “issue is the size of the tax”: the rules of the game are pretty simple here too, IMHO. What do you mean by “correct harms” vs. “avoid immoral outcomes”? If by this dichotomy you mean minimal and maximal moral shaping of behaviour, then I think you’re ignoring the obvious: a maximal moral shaping is inefficient because it incurs other moral costs from other sources. For example, there may be a moral cost to impinging on liberty. Having a maximal tax to “avoid immoral outcomes” in the specific context of e.g. CO2 would have other costs in terms of liberty. Optimizing in the context of *all* the moral costs (which, to reiterate, are external to economics) should tend towards minimal shaping, *assuming* e.g. the Western individualist approach to morality.

  • http://shagbark.livejournal.com Phil Goetz

    You’re saying that “morality” is a way of getting things that you “should” want; “economics” is how to get things you actually want.

    Let’s eliminate the distinction between what you want and what you “should” want. At root, we do want a clean environment; we just want other people to want it more than we do. The term “should” indicates a collective want reified and glorified to sucker other people into pursuing it more vigorously than is rational.

    Here are the definitions that I think underlie Robin’s post. (I’m using these terms in an artificial way, for the purpose of discussion, so don’t think that the things I say about “moral values” here are things that I believe about “moral values” as we usually use the term.)

    Values: What you/society want. Food, happiness, security, etc. Evolved into people.

    Economics: How society can achieve its values.

    Moral values: A second layer of values that encourages people to act in ways that tend to attain society’s primary values. Courage, honesty, sociability, etc. Also evolved into people, though not as consistently. It is easier to defect from moral values than from values.

    Morality: A set of moral values, bundled with heuristics for how to attain them, for people who don’t understand economics.

    Robin is additionally defining morality as ignorant of economics. This is what we observe, but it’s a historical accident, due to the fact that we live in a time when basic economic principles have been developed, but not long enough ago for them to have been incorporated into mainstream moralities.

    I think the distinction between morality and economics is not one of rationality. I would rather define morality as: How society can achieve its moral values.

    A modern, well-informed morality would insist on using economics to achieve its moral values. Our moralities do gradually improve as we gain economic understanding. For instance, early Christianity was radically communist. It shed this gradually over the centuries, eventually allowing ownership of property, charging of interest, enforcement of debts, and respect for wealth.

    Many people would disagree, and say that morality means doing “the right thing” yourself, without regard for what other people will do, because you’re only responsible for your own behavior. E.g., give money to the homeless person on the street even if you know they’re going to use it to get drunk, because you have fulfilled your moral duty, and it’s not your fault what they do next. This is a greedy morality, in 2 senses of the word: It’s algorithmically greedy because it tries to jump directly to attaining your moral values without having to think; and it’s economically greedy, because the goal of such a morality is not to attain society’s values, but only to attain your own personal salvation.

    Anyway. If there is a distinction between economics and morality, I would say that it is one of these:

    1. Economics teaches us how to attain values; morality teaches us how to attain moral values.

    2. Everyone desires values to roughly the same degree, whereas the desire people have for moral values varies greatly. Also, moral values differ from values in that it is generally advantageous to you to pursue values, and to have other people pursue moral values. This means that morality has a game-theoretic dimension beyond what economics does. When one agent considers what public morality to support, he has a trade-off between the cost to him of behaving in a way that appears to adhere to that morality, and the benefit to him from other people – who differ from him in the enjoyment they get from moral values, and in the degree to which they can be suckered into sticking to those values beyond what is rational – following that morality. When people set forth moralities in public, the equilibrium morality is the one that, over the population as a whole, may attain optimal returns; the trick is that some people are receiving their payoff in good feelings and self-satisfaction, or expected payoff in the next world.

    We thus predict that the more irrational the population is, and the larger the percentage of defectors is, the stricter the equilibrium morality will be.

    This suggests that morality as we know it may be /possible/ only in the presence of a large number of defectors. If everyone were honest, there would be no advantage in promulgating social standards stricter than you want to live up to yourself.

    Postscript: When putting forth a morality, the game-theoretic, rational behavior may be to deny the validity of game-theoretic, rational behavior.

  • conchis

    As an argument that economists shouldn’t take morality into account this seems hopelessly self-defeating. You seem to be assuming that economists should have some sort of normative commitment to giving advice that gets people what they want.* Now, either:

    (1) you have some sort of moral argument for that position, in which case you might not want to diss morality so much; or

    (2) your only available response when I say “but I don’t want economists to just tell people how to get what they want; I want them to take morality into account” is to tell me that you don’t want that. At which point I’ll shrug, and wander off.

    *I guess you could be making the argument that people are more likely to listen to advice that conforms to what they want. But that doesn’t seem to be the argument you’re making, and even if it were true, I don’t see that it should make me care any more than I already do about feasibility concerns.

  • conchis

    On rereading, maybe I mistook the point you were trying to make. If so, apologies. I’ll stand by the substance of my comment, but perhaps it’s not actually disagreeing with anything you said.

  • http://blarblog.blogspot.com Unnamed

    (In particular, [economic analysis] suggests good deals – policy packages which should be better for most everyone.)

    This parenthetical may be the most important sentence in Robin’s post. The question is: who counts? Who is part of that “everyone”?

    If a potential U.S. policy benefits most Americans but harms most of the rest of the world, what recommendation will the economist give? What if it helps most people who are currently living to get more of what they want, but makes future generations of people get less of what they want (and more of what they don’t want)? What if it mostly benefits humans, but mostly harms non-human animals? What if it benefits voters at the expense of non-voters, adults at the expense of children, or born humans at the expense of human fetuses?

    These questions do not arise for the other “amoral” advisers, since a doctor, lawyer, or financial planner is advising one individual about what will achieve his or her ends. Economists, to a large extent, are advising individuals about how to get others more of what they want. So the question naturally arises, “which others?” (Another question arises as well, about how to consider tradeoffs between those others, but I won’t get into that here.)

  • michael vassar

    I’m skeptical of the claim that everyone desires values to roughly the same degree. What about depressed people for instance, or various types of ascetics?

  • http://shagbark.livejournal.com Phil Goetz

    Given what I said above about morality helping society attain its moral values, couldn’t you argue that morality ought to be legislated?

    Perhaps our strong experience indicating that morality ought not to be legislated, is only because the moralities we are most familiar with were designed in ignorance of economics, and were refined by defectors to exploit irrationality rather than by society to use force of law.

    Mike: I think people’s desires for values are more similar than their desires for moral values. Both vary. But you won’t find many people who don’t want food and water, and I think you won’t find as many ascetics as amoralists.

    I mentioned it because if people vary in their preferences for moral values, then there is more incentive for “trade” between people who get pleasure from behaving morally, and people who don’t.

    I shouldn’t even use the word “pleasure”, since we’re also discussing people who don’t value pleasure. “Positive feedback”.

    It’s funny that we don’t have a word for the most fundamental drive in human behavior.

  • http://philosophyetc.net Richard

    Robin, can you clarify what normative claims you are making here? Your title makes the evaluative claim that “morality is overrated”, and in your comment you say you are offering “an apology for ignoring ethics and just looking for deals that get people what they want.” But I don’t see that you’ve offered any argument for these normative claims.

    Your post is full of merely descriptive claims — your “bottom line” merely describes what people want. It’s not obvious that anything interesting follows from this, unless you also care to argue that ‘what people want’ has normative relevance – is *worth* satisfying – even when it’s contrary to moral requirements.

    Phil Goetz – you appear to assume a merely instrumental conception of rationality. Note that many philosophers would reject this, and indeed hold that it at least cannot be irrational to do as morality requires (and some would further claim that it is rationally required).

  • http://shagbark.livejournal.com Phil Goetz

    Phil Goetz – you appear to assume a merely instrumental conception of rationality. Note that many philosophers would reject this, and indeed hold that it at least cannot be irrational to do as morality requires (and some would further claim that it is rationally required).

    Thanks for the link; otherwise, I wouldn’t have had a clue what you meant.

    The idea that you can rationally criticize values according to their consistency is very interesting. The example of someone who cares what happens to them until New Year’s Eve, but not afterwards, is good.

    But your larger example – that, rationally, we should extend our consideration beyond our immediate friends and family – which I think is your motivation for holding the position, is dead wrong. The instinct to value friends and family has been partly explained by evolutionary psychology, and experimentally validated enough that we can say that’s probably the right explanation.

    This is why it is easiest to treat values as arbitrary. Our values are evolved instincts, and evolution can create arbitrary values. The evolved values are the ones we have to work with. If you start critiquing them rationally, you’ll end up with abstract philosophical values that no one can live with, and that aren’t evolutionarily stable. You’d be repeating, in a way, the moral mistakes of religion and Marxism.

    And you’d have a serious onion-peeling problem trying to draw a line between aspects of values that are irrational, and the root values themselves, which are by definition irrational.

  • http://philosophyetc.net Richard

    The question whether our values are coherent (rationally justified) is logically independent of the question how we came to have them. To think otherwise is to commit the genetic fallacy.

    Granted, human psychological constraints are something that needs to be taken into consideration by moral philosophers. It’s one thing to work out what the ideally rational agent would value, and quite another to specify what imperfect beings like ourselves ought to value. Roughly: we ought to adopt whatever ‘practical morality’ would in fact cause us (given our actual condition) to come closest to realizing the ideally rational ends.

    But that’s no excuse at all to simply ignore the rational question and treat all possible values as equally arbitrary. (Certainly not in an academic forum like this, where we are presumably trying to get at the truth.)

  • http://shagbark.livejournal.com Phil Goetz

    Richard: The idea that you can rationally decide what values to have, rather than taking the values that have been constructed by evolution, is even less feasible than the idea that you can rationally decide via inspection what the symbols in a knowledge representation “mean”, without taking into account how those symbols are grounded in the environment. In the latter, you can at least test different hypotheses for internal consistency. Values are not required to be consistent; in fact, they always come into conflict with each other.

    I am not aware of any way of rationally choosing values. I don’t think the ideally rational agent would value anything. If you construct a large, complex, rational artificial intelligence with lots of knowledge, it won’t do anything unless you assign it values/goals.

    What is the genetic fallacy?

  • Eneasz

    Have you looked into Desire Utilitarianism? It seems like it would address most of your concerns. As far as I’ve been able to find, it’s the ethical theory that best explains current actions and best predicts future actions. It also easily refutes the notion that 3^^^3 dust specks is worse than 50 years of torture without all the mental contortions that act utilitarians have to go through.

  • tobbic

    It seems to me that it would be beneficial for someone to strictly define the concepts in the discussion (morality, want, etc.).

    I agree with Robin Hanson that values (which constitute a morality) are essentially the same as wants (if I understood correctly). Both express preference, but from different perspectives.

    Maybe morality and the consequent values represent a penalty function whereas wants represent a utility function. It seems to me they can be superimposed (and thus are the same thing). I think that for many people the penalty function is not a step function that drops to ‘-inf’ once some fixed value has been violated. However, my impression is that people tend to think of values as absolutes rather than as wants that can be traded off.
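
    One way to make that superposition concrete (a sketch with invented functions and weights, not tobbic’s own formalism):

```python
# A "moral value" modeled as a penalty term superimposed on an ordinary
# utility function. With a small penalty weight, morality trades off
# against other wants; as the weight grows, the value behaves more like
# an absolute (step-function-like) rule.

def want_utility(deception):
    return 5 * deception        # deceiving more gets me more of what I want

def moral_penalty(deception, weight):
    return weight * deception   # "deception is wrong" as a graded penalty

def total_utility(deception, weight):
    return want_utility(deception) - moral_penalty(deception, weight)

for weight in (1, 5, 100):      # 100 approximates a near-absolute value
    best = max((0.0, 0.5, 1.0), key=lambda d: total_utility(d, weight))
    print(f"penalty weight {weight}: chosen deception level {best}")
```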

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Phil, you had 7 of the 10 most Recent Comments in the sidebar. This should never ever happen. 2 is the usual limit, 3 in very rare circumstances, never 4.

    I’ve compressed some of your comments together to cut it down, but still, wait to post your replies until you see fewer instances of your name in the sidebar.

    Also, your comments were very long and you should consider writing your own blog post, then linking to it here.

  • Unknown

    Suppose you could take a pill which would instantly make it impossible for you ever to desire to do something which you believed to be immoral (unless you changed your mind about it first, i.e. completely before starting to desire to do it). Would you take the pill?

    Some people would take it. I presume that Eliezer would take it, based on earlier things he has said on similar topics. I would definitely take it.

    For someone who would take it, it doesn’t make a lot of sense to say that his desire not to desire immoral things is weaker than his immoral desires, even if in practice the latter desires are found to guide more of his actions than the former desire.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Robin, I do not think you are using the word Hanson::morality the way I use the word Eliezer::morality. What you call “morality” I call “moralizing” or “sanctimony” or “self-righteousness”.

    Unknown: Suppose you could take a pill which would instantly make it impossible for you ever to desire to do something which you believed to be immoral… I presume that Eliezer would take it

    If so, I would be doing it either for instrumental reasons (makes it easier to help others) or weakness of will. As a strictly local act, taking that pill would be immoral.

    My reasons for saying this should become more apparent once I get into the sequence on Fun Theory (no, it doesn’t start today) but roughly, if you simplify your psychology past a certain point, you stop being an end, and end up as a means.

  • http://philosophyetc.net Richard

    Phil – of course, we reason (practically) from the values we start with, just as we reason (theoretically) from the beliefs that we start with. In both cases, rational pressure induces us to resolve inconsistencies, minimize arbitrariness or ad hoc distinctions, and build on our existing attitude (belief/desire) sets so as to make them more internally coherent, unified, etc. (That’s why the post I linked to argues from our existing values – concern for ourselves, friends and family – to a more coherent value set that also includes concern for others who are relevantly similar.)

    N.B. You seem to be assuming some sort of foundationalism, according to which rationality starts from scratch, deriving conclusions from “self-evident” first principles. I’d agree that’s a hopeless project. Fortunately, there’s an alternative: coherentism. Start with what you’ve got, and ask whether there are any ways it could be improved. Frankly, I’d be stunned if the answer were ‘no’.

    What is the genetic fallacy?

    It’s a fallacy of irrelevance, like ad hominems, whereby you look exclusively at the historical origin of an attitude rather than assessing it on its merits. (Example: “People believe in God because it makes them feel good. Therefore, God doesn’t exist.”) Essentially, it is to confuse the projects of *explanation* and *justification*. To explain a belief or value is not thereby to show that it is wrong or unjustified. (It might be wrong or unjustified, but that requires further argument — philosophy, not psychology.)

  • Unknown

    Actually, after thinking about it a bit more, I would have to add some conditions in order to take the pill myself. But I still don’t see any reason why taking it would be immoral, so I’ll just have to wait for the Fun part…

  • Constant

    This should never ever happen.

    If the concern is that other comments are pushed away, you can vastly improve the user experience of those not wanting to miss comments by adding a “more…” link at the end of the “recent comments” pane and a separate page with a longer list of recent comments. Of course, if your purpose is to concentrate the mind by setting up arbitrary rules, you could require all commenters to post in haiku format.

  • http://shagbark.livejournal.com Phil Goetz

    Desire Utilitarianism (googling) is Preference Utilitarianism with energy minimization to find a set of preference weights that gives a local maximum of preference satisfaction. It sounded interesting at first, but a few minutes’ reflection shows that it leads to either Buddhism or Satanism.

  • Michael G.

    How do we make it so that what we want is what we should do?

  • Eneasz

    Phil, could you expand on that, if you have the time? I am flabbergasted as to why you’d think it would lead to either Buddhism or Satanism.
    For two quick reference points (well, not that quick, but not too long), the creator of DU keeps a blog, which often delves into the theory. Much of the basic groundwork is covered in:
    http://atheistethicist.blogspot.com/2007/12/morality-from-ground-up.html
    and
    http://atheistethicist.blogspot.com/2008/01/kmesons-question-acts-and-desires.html

    Michael, I’d point you to http://atheistethicist.blogspot.com/2008/01/good-resolutions-and-how-to-keep-them.html although I fear at this point I’m beginning to look like a bit of a fanatical disciple. :)

  • http://entitledtoanopinion.wordpress.com TGGP

    Economists can do research in order to learn about what is efficient. There is no productive research in morality because it is an empty field, like theology or astrology. It has no impact on anticipated experience. Seeing as how there are no moral facts, there is no reason for them to be taken into account.

  • http://entitledtoanopinion.wordpress.com TGGP

    Richard’s talk of coherent justification reminds me of traditional rationality as social rules.

  • poke

    Morality is just bad psychology. Normativity is one way a descriptive theory can fail.

  • Paul Gowder

    Robin,

    I don’t think you’re saying what you think you’re saying.

    Or, to put it differently, “Overrated” with respect to what?

    You seem to be saying that morality is overrated with respect to what we want, i.e. we claim morality is more important to us than it is, so we overrate (in our utterances) the true value (as a matter of desire satisfaction) of morality.

    If that’s true (and it probably is), then you’ve stated a descriptive fact about the world. We claim to care more about morality than we actually care about it. That’s fine and dandy.

    But you want to push it further. You say that it’s a “problem.” And you say that your post constitutes an apology (a justification) for economists not focusing on ethics.

    How could it be such a thing? I’ll invoke Hume’s Law here. The claim as I’ve stated it has only descriptive premises. It can’t constitute a claim that morality is overrated, or that it’s OK for economists to focus only on what we want.

    Unless… unless… there’s a normative claim under the hood! Something… something about what we ought to do, given our preferences… And what might that normative claim be?

  • http://hanson.gmu.edu Robin Hanson

    Phil and Eliezer, I’m trying to use the word “morality” the way moral philosophers use it.

    Conchis, I’m saying your wants are rare – few want that.

    Unnamed, I mean to include all nations, all time, and even all species.

    Michael V, I don’t follow you.

    Richard and Paul, I’m not making “normative” claims, just claims about what we want. Call them “merely descriptive” if you want – I and many others find such claims persuasive regarding the actions we will take.

    Eneasz, you miss the point I think – I’m not looking for moral views.

    Tobbic, I’m not at all saying values are the same as wants.

    Michael G, I don’t want to do that at all.

  • Unnamed

    Robin, do you mean that you’re including what future people want among the “wants” that are the subject of purely economic advice? How does that square with what you say about CO2 emissions, where you seem to be counting future generations as a “moral” consideration?

  • billswift

    Susan Haack’s Evidence and Inquiry develops a new basis for epistemology by bringing together foundationalism and coherentism in a fashion where each part strengthens the weaknesses of the other. She hasn’t been completely successful, but it is a useful idea and a good start.

    For a more rigorous basis of ethics, you might try Merrill’s The Ideas of Ayn Rand. He suggests (pages 106 through 109 of the Open Court paperback edition) that oughts can be derived from facts: "So, if we can agree on what morality is to accomplish, we can develop moral rules from factual statements." Basically, he advocates unifying the operational use of ought with its normative use, where the operational use is of the form "If you want to accomplish this, then you ought to do that." Again like Haack’s ideas, a good start though problems remain.

  • billswift

    Sorry, I meant to include a couple of sentences on Haack’s theory, which she referred to as “foundherentism”. Rather than axiomatic foundational beliefs or beliefs that are strictly judged by their coherency with other beliefs, the theory she developed used a relatively small set of foundational beliefs which were themselves supported by their mutual coherency. I can’t really describe it in any more detail since I haven’t finished reading the book.

  • http://hanson.gmu.edu Robin Hanson

    Unnamed, the weight a group would get in a deal need not be the same as the weight it morally should be given. Distant future generations get little weight in deals made today.

  • http://dl4.jottit.com/contact Richard Hollerith

    Eliezer writes, if you [do X], you stop being an end, and end up as a means.

    Aw, come on. Jump in. The water’s fine.

  • http://www.iphonefreak.com frelkins

    I understand Robin to say that he doesn’t care much about morality and neither do most people, despite their protestations. And that’s cool. But what I think may be important to note is that a market actually can – or could – take most people’s rather skimpy ‘tude towards morals and yet produce the most beautiful and moral ends. One of the benefits of capitalism is that it does not require a pure heart to enable its actors to en masse do good.

  • http://shagbark.livejournal.com Phil Goetz

    Eneasz: Re. Desire utilitarianism:
    Let the space X represent your possible actions. You have a set of functions f_i(x), for x in X, giving the degree to which action x satisfies desire i. You have a set of weights, w_i. Utility for action x is the sum U, over all i, of w_i*f_i(x). (Sum over all desires for all individuals. Note that each f_i accounts for all future repercussions of your action, appropriately discounted for time.)

    Desire utilitarianism adjusts the w_i to maximize U. An example given was that 1000 people want to torture an innocent child. U is highest if you let them do so. But dU/d(w_t), the rate at which U changes as you change the weight w_t on their desire to torture children, is negative, because increasing that desire in people leads to more frustration of other desires. Hence, you should decrease w_t.

    But then you have to decrease w_t for 1000 people. It’s easier to increase w_u, the weight assigned to the desire to be tortured, for the innocent child. (And if you can’t change weights arbitrarily, and make people enjoy being tortured, then the whole program of DU falls apart.)

    We like having a desire fulfilled, and dislike having it frustrated. So every function f_i will probably be monotonically increasing along some dimension from < 0 to > 0, then declining after saturation. This means that for every set of weights w_i that provides a local optimum at x, the set of weights -w_i probably provides a local optimum, maybe around -x. That’s the “Satanist solution”, in which all our desires are inverted: People enjoy torturing each other, and being tortured.

    The Satanist solution probably gives a higher U, because it’s easier to fail than to succeed. You will achieve a higher U if you persuade everyone to enjoy pain, failure, and loss, and then set about wreaking havoc.

    Buddhism is the trivial solution in which you set all w_i = 0. It is an optimum if satisfying desires is zero-sum. It is the optimum when satisfying desires is less-than-zero-sum. It isn’t clear that it isn’t always the optimum.
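
    The formalism above is compact enough to run. A minimal sketch, with invented desire functions and weights, of the weighted-sum utility and the two degenerate weight settings described:

```python
# U(x) = sum_i w_i * f_i(x), per the setup above, over a one-dimensional
# action x in [-1, 1] with two invented, opposed desires.

def U(x, weights, desires):
    return sum(w * f(x) for w, f in zip(weights, desires))

desires = [lambda x: x,    # a desire satisfied by doing more of x
           lambda x: -x]   # an opposed desire, frustrated by x

actions = [i / 10 for i in range(-10, 11)]

def best(weights):
    return max(actions, key=lambda x: U(x, weights, desires))

w = [1.0, 0.5]
print("optimum:", best(w), "U:", U(best(w), w, desires))

# The "Satanist solution": negate every weight, and the mirrored action
# becomes the optimum with the same structure.
print("negated-weights optimum:", best([-wi for wi in w]))

# The "Buddhist solution": all weights zero, so U is identically 0 and
# every action is trivially optimal.
print("zero-weights U anywhere:", U(0.3, [0.0, 0.0], desires))
```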

  • Unnamed

    Distant future generations get little weight in deals made today.

    So your example of only modest CO2 cuts is a policy that harms lots and lots of people who have low weight (distant future generations), not a policy that’s “better for most everyone.” Does that mean that you’re just doing standard utility calculations, summing (over individuals) weight times impact?

    How are these weights set? Giving some individuals more weight than others sounds like a moral stance to me, and not just value-neutral practical advice.

  • Psychohistorian

    “Economic analysis tries to infer what people want.”
    Pretty sure this is where problems started.

    Any attempt to infer what people want is, in this context, moral. Do we give everyone equal weight? Do we give the poor more weight? Do we give the rich more weight? Do we give the more informed more weight? Etc.

    More to the point, I don’t see how an economic analysis is able to conclude what people want. An economic analysis would, for example, create a function expressing y (carbon output) in terms of x (the amount of a carbon tax). Nothing in economic analysis can tell you the appropriate target value of x. Instead, we have some system for determining an appropriate value of x. In the case of the US, x is determined via representative democracy.

    RH would probably say that x (or y) should be determined by a prediction market. This itself is something of a moral value. Another person could say that the tax should be set by those it will most harm, or those it will most benefit, or those who can best determine its effects.

    Economics is not objective when it comes to finding what people want. It is a very specific system that assigns different weights to different people based on complex criteria. Claiming that what one determines via economics is “truly” what people want is claiming far too much objectivity for the science.

  • Z. M. Davis

    Constant, I should think
    That that kind of snarkiness
    Is unbecoming

  • http://dl4.jottit.com/contact Richard Hollerith

    Z. M. Davis reads
    The Logic of Science by Jaynes,
    Stops to write haiku.

  • Constant

    It is not a snark
    If the person who wrote it
    Thinks it sound advice.

  • http://www.scheule.blogspot.com Scott Scheule

    Robin,

    Michael F, morality is not the only or best way to deal with conflicts between individual wants.

    This statement is paradoxical. To say that there is a “best” means of settling conflicts is to introduce a moral element. Alternatively, you are using a definition of “morality” I don’t recognize, a meaning distinct from “normativity.”

    So far as economists phrase their findings as “If we want this, we should do this”, they are being amoral. But if the “if” clause is missing, economists are making moral judgements. You apparently would prefer the if-clause left in more often. For instance, when a while ago you suggested adoption of the “tall tax,” if you had wanted to be amoral, you should have said “if we want to maximize utility as economists define it, we should tax the tall.”

  • http://apperceptual.wordpress.com/ Peter Turney

    Robin,

    … some advise us on what we should do, while others advise us on how to get what we want. … Moral philosophers, for example, tend to emphasize policies we should pick, while economists tend to emphasize policies to better get us what we want. … After all, morality is only one of the many ends we pursue. …

    Morality is a means, not an end. Consider the Iterated Prisoner’s Dilemma. We tend to view cooperation as morally superior to defecting. Many people believe that tit-for-tat is a good moral rule. Note that tit-for-tat is an algorithm — a means. Morality does not involve altering the payoff matrix in the Iterated Prisoner’s Dilemma — it is not an end; it is not another value to put in the matrix.

    Moral rules encode hard-won wisdom about how we should best go about getting what we want. Consider a few familiar moral rules: tit-for-tat, do unto others as you would have them do unto you, watch your karma, what goes around comes around. Moral algorithms are algorithms that work better than immoral algorithms in the long run, averaged over many interactions. Immorality is about short-term thinking and ignoring probabilities and risks (gambling).
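
    Since tit-for-tat is the running example of a moral algorithm here, a minimal runnable version of it in an iterated prisoner’s dilemma, using the standard illustrative payoffs:

```python
# Tit-for-tat as an explicit algorithm: cooperate first, then copy the
# opponent's previous move. Payoffs are the standard illustrative
# prisoner's-dilemma values (T=5 > R=3 > P=1 > S=0).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)  # each sees the other's past
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print("TFT vs TFT:", play(tit_for_tat, tit_for_tat))      # sustained cooperation
print("TFT vs ALL-D:", play(tit_for_tat, always_defect))  # defection punished after round 1
```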

    some advise us on what we should do, while others advise us on how to get what we want

    Once you see that morality is a means, not an end, this false dichotomy dissolves. The best, wisest advice about how to get what you want is also moral advice on what you should do.

    For more on this, see A Scientific Approach to Morals and Ethics.

  • http://hanson.gmu.edu Robin Hanson

    Scott, “best” is a flexible word which can refer to a wide range of metrics depending on the context. Here I meant “best at getting us what we want.”

    Peter, I’m using the word “moral” the way moral philosophers do, which is not as a means.

  • http://apperceptual.wordpress.com/ Peter Turney

    Peter, I’m using the word “moral” the way moral philosophers do, which is not as a means.

    Robin, I am a philosopher (PhD Philosophy, University of Toronto). You are not using “moral” the way I use it.

    If I understand you correctly, you agree with me that morality is really a means. So your essay above might be summarized as follows: “Let’s define morality as an end. When we do this, the conclusion is that morality is overrated.”

    If this summary is correct, then let’s take the next step, which is to conclude that we should not define morality as an end; rather, we should define it as a means.

  • http://www.scheule.blogspot.com Scott Scheule

    Robin,

    To recap then, what you mean when you say “morality is not the… best way to deal with conflicts between individual wants” is “morality is not how most people want to settle conflicts between individual wants.”

    I’m not sure what to make of that. I’m certainly not sure it’s true. You’re now introducing meta-wants. If you’re going to be so free-wheeling with what counts as a “want”, one might as well define our predilection to engage in moral philosophizing as satisfaction of a want itself, a want to engage in moral philosophizing (and presumably a subsequent want to ignore it).

    When we decide whether or not to adopt an economic plan, we’re making a moral decision. I have no idea how you can extricate morality from any policy prescription–once an economist or anyone says we should do something, that’s a moral judgment. I see no way to lessen that.

    Your argument, as I see it, is not that we should be amoral or more amoral, but that we should not reevaluate our wants, and instead just take them as given. There are problems with this. One is the deconstruction-esque “What if we want to reevaluate our wants?” The second is that you don’t give a reason why we shouldn’t reevaluate our wants. Your other point is, I take it, that we should be honest about the policies we want. That’s uncontroversial, but of course, “one should be honest” is a moral opinion.

  • Eneasz

    Phil, I believe the error you are making is in assuming that DU is a simplistic “Maximize the amount of U” type of theory. This is the same sort of error that makes 50 years of torture seem preferable to 3^^^3 dust specks. DU is based on facts observed in the real world. Unlike many ethical theories that simply postulate rules and entities, DU attempts to explain observed phenomena and predict future results. It works off facts and observation. Thus it recognizes that some desires simply cannot be changed beyond a certain point, and/or that the effort expended to change certain desires to such a degree would cost more than would be gained.

    Any assessment that leads one to think DU would ultimately result in Satanism or Buddhism fails to grasp real facts about the real world.

    Satanism would lead to greater sickness, injury, and death. Sickness and injury are bad things because they prevent the fulfillment of present desires, and possibly future ones as well. Death prevents even the possibility of fulfillment of the vast majority of desires.

    Buddhism in the way you describe would lead to death almost immediately. Of course for a person who has no desires, death wouldn’t be a bad thing. However, a world without desires (aside from being an impossibility) is also a world in which there isn’t any need for any ethical theory, because there is no morality without desires.

  • Lake

    @ Peter: philosopher or not, you’re using the word “morality” in a peculiar way. If morals were mere means to an end, one would be free to disregard them provided one didn’t aspire to that end. But morals are supposed to apply regardless of what you want. I also suspect that the history of the metaphysics of morals would become pretty unintelligible if you were to find and replace “morals” with “means” – another clue that you may be on a different page to your colleagues.

    Your point that morals are frequently useful may well supply the basis for an ev. psych. explanation of why we have them. But that doesn’t mean that the moral and the useful are coextensive, any more than the tasty and the nutritious are.

    @ Robin and Scott: the problem seems to be that Robin is treating morality as just one lot of preferences among many, all of which must be traded off against one another to maximise some further quantity. Yet the only quantity that you could conceivably be obligated to maximise categorically is that of morality itself.

  • http://www.scheule.blogspot.com Scott Scheule

    Lake,

    I agree. I think Robin intends something different from “morality” in its typical sense. Perhaps “non-economic goals” would be closer to what he means.

  • http://profile.typepad.com/sentience Eliezer Yudkowsky

    I don’t think you can get away with saying, “I don’t need to define how I’m using the word ‘morality’, I’m using it the standard way philosophers use it.” What standard? Which philosophers?

    People talk about what ‘should’ be done as if that word has a distinct referent from what is done – for example, Robin Hanson thinks we should rate Hanson::morality less than we do.

    For me, ‘morality’ refers to the mysterious shape of that strange word, ‘should’.

  • Unknown

    Richard Hollerith: Actually that was more or less my reaction when I read Eliezer’s response. I thought, “Are you sure that’s such a bad thing?”

    I’m not sure I would mind being a means, if the end were important enough.

  • http://hanson.gmu.edu Robin Hanson

    Many people keep trying to rephrase me as saying “we should” when I’m being very careful to avoid that. I’m saying “we want.”

    Scott, I have no objection to people re-evaluating their wants. I’m talking about what their current wants point to.

    Lake, I’m not saying morality is a preference, but I am saying we have preferences about how moral to be.

    Eliezer, moral philosophers seem to take “I should do act X now” as the same as “Act X is moral (for me now).”

  • Constant

    I agree that economists should not answer questions about morality. But the reason is not that morality is overrated. The reason is that it falls outside of the scope of economics, and almost certainly outside the competence of most economists. The appropriate picture here is not of locking morality into the closet while we get down to the business of doing what we want. The appropriate picture is herding the economists back into the pen.

  • http://philosophyetc.net Richard

    Robin – ‘I’m not making “normative” claims, just claims about what we want. Call them “merely descriptive” if you want – I and many others find such claims persuasive regarding the actions we will take.’

    I’m not sure you can avoid the normative so easily. I mean, presumably the only reason you’d bother drawing attention to a merely descriptive claim is that you think it has some normative upshot, i.e. it speaks to the practical question what to do (or ‘how to live’, as the ancient ethicists put it). Economists are rather notorious for offering normative claims by stealth, under the guise of the purely descriptive. Maybe you don’t want to discuss the normative principles you’re assuming in this post. But it seems strange to pretend that there aren’t any there. (Especially when you’ve invoked so much evaluative language along the way.)

  • http://www.scheule.blogspot.com Scott Scheule

    Robin,

    In that case, what you mean isn’t clear. Seeing as it’s not reevaluated goals you’re protesting against, I repeat, the best reading of what you mean by “moral” is “non-economic values.”

    Indeed, you come close to saying so when you write:

    … If so, economic analyses advising only modest CO2 taxes are a moral travesty, reflecting an unconscionable neglect of “non-economic” considerations.

    So you think we should (or we really want to?) neglect – or at least neglect more than we do – “non-economic” considerations. So just say that – there’s no need to bring in the bigger concept of “morality.”

    Also:

    Many people keep trying to rephrase me as saying “we should” when I’m being very careful to avoid that. I’m saying “we want.”

    Yes, but you’re saying morality is overrated. If you’re not comparing that to what we should rate it as, then what are you comparing it to? What we deep down really want to rate morality as? Is your argument thus that we’re under a false consciousness, and don’t know what we really want (and you do)?

  • conchis

    Robin,

    “I’m not making “normative” claims, just claims about what we want…I and many others find such claims persuasive regarding the actions we will take.”

    (1) Robin::wanting something doesn’t seem to be the same thing as conchis::wanting it, and I think the normative persuasiveness of your descriptive claim depends quite a bit on exploiting the ambiguity here.

    Your descriptive claim seems obviously right if you define Robin::wants as revealed preferences (although you won’t have said anything we didn’t all already know). But I think most other people’s working definition of wants is closer to what Unknown was getting at above: there’s a structure to desires and desires-about-desires that seems relevant here (plausible DU accounts usually recognise this; perhaps that’s what Eneasz was getting at?). Anyways, I’m pretty comfortable claiming that people conchis::want to be moral rather more than they Robin::want to.

    To the extent that that’s true it’s no longer obvious what we should do to help people get what they “want”. To help people get what they Robin::want we downweight moral concerns in our advice. But to help people get what they conchis::want, not so much.

    (2) Neither you nor many others can get from descriptive statements about either Robin::wants or conchis::wants to conclusions about what you and said others will do without importing some sort of normative/moral standards (at the very least you need to balance these wants against each other somehow).

    (3) Even if you could, I don’t see why it should follow that because others are imperfectly moral you should strive to emulate their imperfection. Do you think your advice should be biased because people are biased?

  • http://hanson.gmu.edu Robin Hanson

    Richard, one can speak to the practical question of what to do without speaking to the normative question of what one should do. One can instead speak to the question of what will get you what you want. For most people, this is in fact more persuasive.

  • Unknown

    Robin, are you sure those are two different questions, once you have properly evaluated your conflicting wants? It may be that acting morally simply means correctly weighing the things that you want, so that the moral thing to do is the thing that really gets you the most of what you most want.

  • http://www.scheule.blogspot.com Scott Scheule

    One can instead speak to the question of what will get you what you want.

    Yes, but to weigh the wants of multiple people involves questionable normative claims. If you want to avoid that, you have to speak of what will get you what you want, assuming people’s wants are comparatively weighed in manner X. Agreed?

  • http://philosophyetc.net Richard

    Robin, I don’t see two questions here. The question what to do just is the question what one should do; just as the question what to believe just is the question what one should believe. Now, one might think that the thing (one ought) to do, all-things-considered, is whatever ‘will get you what you [already, unreflectively] want’. That sounds like a dubious normative principle to me. But that’s what you’re really presupposing here.

    (Perhaps you are using the term ‘should’ in the so-called “inverted commas sense”, to refer merely to what haughty moralists *say* you should do. But philosophers use the term ‘should’ to denote what one really has most reason to do.)

  • http://hanson.gmu.edu Robin Hanson

    Richard, I see the question “what to do” as meaning “what would I choose to do, given as full a consideration as possible of that choice.” I want to do what I would do if I considered the choice fully. This is not the same as what I “should” do. I can be fully aware of what I should do and have thought carefully about my choice and yet still choose something else.

  • http://philosophyetc.net Richard

    Granted, there’s the phenomenon of “weakness of will”, whereby we act against our better judgment. But that seems to be a special case. Did you have something else in mind? (That is, do you endorse your divergent answer to the question “what to do” as your better judgment as to what act is warranted, or do you take yourself to be going wrong – by your own lights, even – in such a case?)

  • http://hanson.gmu.edu Robin Hanson

    Richard, I’m not sure how I can be any more direct or clear about this: we all knowingly make choices contrary to what we “should” choose. Yes sometimes this is due to mistakes, but it mainly reflects the fact that we do not want only to be moral.

  • http://drzeuss.blogspot.com Dr. Zeuss

    Robin, yes people tend to do what they want to do, and many people overestimate (I think this is where you’re getting “overrate”) how much weight people place on what they should do in deciding what they will do.

But that doesn’t mean that people should place more weight on what they want to do than on what they should do. Maybe people will do what they want to do, but they should do what they should do.

  • http://philosophyetc.net Richard

    Robin, you keep putting “should” in inverted commas. Philosophers are interested in what we should do, not what we “should” do. So I’m trying to work out whether we are just talking past each other. Shifting to a neutral, unambiguous vocabulary might help. Hence my question: are the “amoral” decisions you’re talking about ones that the agent can endorse on reflection, or judge as what they have most reason (all things considered) to do?

    If so, they’re assuming normative principles. If not, the agent is irrational by their own lights. Either way, you can’t avoid the fact that decision-making — and the practical reasoning that underlies it — is essentially normative.

  • http://apperceptual.wordpress.com/ Peter Turney

    Robin,

    I’m not sure how I can be any more direct or clear about this: we all knowingly make choices contrary to what we “should” choose. Yes sometimes this is due to mistakes, but it mainly reflects the fact that we do not want only to be moral.

    I disagree. I believe that enlightened self-interest is a sufficient basis for morality. In this view, when we make choices contrary to what we “should” choose, it is always due to mistakes (lack of enlightenment). A fully enlightened being (if such a being could exist) would never knowingly make choices contrary to what “should” be chosen.

    Sometimes we feel a conflict between what we want and what we know we should do. You seem to believe that this conflict must be due to competing values (wanting to do the moral thing versus wanting to do the immoral thing). I believe that the conflict is due to bugs in our algorithms for making decisions (lack of enlightenment).

    What would it mean to want to be moral (to do the moral thing) purely for the sake of morality itself, rather than for the sake of something else? What could this possibly mean to a scientific materialistic atheist? What is this abstract, independent, pure morality? Where does it come from? How can we know it? I think we must conclude that morality is a means, not an end in itself.

  • http://hanson.gmu.edu Robin Hanson

Richard, it is not clear to me whether what we should do is the same as what we have the most reason to do. But it is clear that we often knowingly choose acts other than the act we believe we have the most reason to do, and other than the act we think we should do. Call us “irrational” if you will, but we expect and intend to continue this behavior.

  • tobbic

So a value is a rule of thumb that helps us make decisions so that we get what we want in the long run (i.e., a means)?

Is it that a value is a separate term in our total utility function? Like “don’t deceive people” = “I believe deception is wrong” = “when making decisions, choose an action in which the degree of deception is as low as possible; any deviation from this will cost you a huge penalty”. So you just take a function (e.g. a sum) over the separate values and preferences/wants and voila, there’s your total utility. In this sense I still stick with the claim that in some contexts it is useful to consider values and wants equivalent.
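
To make that picture concrete, here is a minimal sketch of such an additive utility function; the particular wants, weights, and penalty size are invented for illustration, not anything from the thread.

```python
# Total utility as a weighted sum over ordinary wants, plus a steep
# penalty term for violating the value "don't deceive people".
# All names and numbers below are illustrative assumptions.

def total_utility(action):
    # Ordinary wants: each carries a weight and a score for this action.
    wants = {
        "money": (1.0, action["payoff"]),
        "leisure": (0.5, action["free_time"]),
    }
    utility = sum(weight * score for weight, score in wants.values())

    # Value term: deception is so heavily penalized that deceptive
    # actions are almost never chosen, which makes the value behave
    # like a rule ("don't deceive") rather than an ordinary want.
    DECEPTION_PENALTY = 100.0
    utility -= DECEPTION_PENALTY * action["deception"]
    return utility

honest = {"payoff": 5.0, "free_time": 2.0, "deception": 0.0}
dishonest = {"payoff": 8.0, "free_time": 2.0, "deception": 0.5}
print(total_utility(honest))     # 6.0
print(total_utility(dishonest))  # -41.0: the penalty swamps the extra payoff
```

On this picture a value is just another term in the sum, only with a weight large enough to dominate most trade-offs.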

  • tobbic

The fact that a value is not an end in itself distinguishes values from wants/desires. It’s a heuristic to help us make better decisions (e.g. stick with social norms)?

  • Unknown

Robin, I expect to, but do NOT intend to, continue to choose to do things I shouldn’t do and that I don’t have good reasons for doing. What does this say about me, in comparison to someone who intends to continue acting in this way?

  • http://hanson.gmu.edu Robin Hanson

Dr. Zeuss, yes folks should do what they should, but don’t want to.

    Unknown, not expecting to do what you intend must feel very frustrating.

  • Nick Tarleton

    Unknown, not expecting to do what you intend must feel very frustrating.

    Are you saying you don’t? You never expect to break some resolution due to weakness of will or simple forgetfulness or whatever?

  • http://profile.typepad.com/rebeccaroache Rebecca Roache

I wonder whether this is really true: ‘But in fact we care only moderately about what we “should” do’. In my experience, when asked to justify their actions, many people will attempt to argue that they do act morally – and, if it is pointed out that they do not act in the most moral way, will claim that they are unable to do so because of certain constraints (e.g. the people who say, ‘I know that buying meat that is produced cruelly isn’t ideal, but it’s cheaper and my money is limited’). Or, more rarely, they will disengage and agree that they don’t act morally, but that they don’t care. I haven’t ever come across someone who concedes that they do not always act morally, but justifies this by claiming that they care less about morality than about other considerations. Perhaps this comes down to an impoverished conceptual scheme, or to social pressure to be seen to be moral.

But there is social pressure to be seen to be moral for at least one good reason: being moral, at least in part, involves being considerate of others. For that reason, the claim that ‘[w]hat we humans want is policy that considers our wants overall, without giving excess weight to morality’ is misleading. On one view of morality – namely liberalism – a policy that considers our wants overall just is a policy that gives primary weight to morality, provided that this policy recognises that people’s various wants often conflict, and that measures are sometimes necessary to prevent the wants of one person or group causing significant harm to another person or group.

Having said that, I think you’re partly right: in liberal societies, people don’t see why they shouldn’t satisfy their wants provided that doing so doesn’t harm anyone else. That this is true can be seen from the fact that legislation against acts that don’t cause significant harm to others – such as homosexual acts between consenting adults – is generally frowned upon in most Western societies.

    A disclaimer: I have only skim-read the other comments, apologies if I’m rehashing something that’s already been said!

  • http://hanson.gmu.edu Robin Hanson

Rebecca, when people say “they don’t care” or when they cite “constraints”, we can usually clearly see the other considerations that weighed against morality. For example, yes money is limited, but we can see other discretionary items in their budget and infer that they preferred those items to humanely produced meat. People like to talk as if they had no choice, in order to excuse their choices, but we know better.

    I agree there is social pressure to be moral and that one liberal ideal limits morality to dealing efficiently with conflicting wants, but most people’s concepts of morality go well beyond this.

  • http://profile.typepad.com/rebeccaroache Rebecca Roache

I completely agree that it’s a rationalisation, but the point is that they feel pressure to be seen to be acting morally, which – however you analyse it – involves (perhaps reluctant, in some cases) recognition that there are more important things than their own wants. Those recognised other things might involve morality, or they might involve a selfish desire to avoid being seen as a certain type of person. But even the latter indirectly involves moral concerns: in this case, it is the desire not to be seen as a selfish person who always prioritises their own wants. And our condemnation of selfish people who prioritise their own wants, if such condemnation is reflective and intelligent, is primarily moral. (If it is unreflective, there may be an evolutionary explanation for it.)

It may be that most people’s conceptions of morality are more substantial than what liberalism lays out, but my point is that even the liberal conception of morality is a conception of morality, and it captures quite well the desire you attributed to humans for a policy that considers our wants overall. Therefore, humans do want a moral policy. I think that any claim that what humans want is a policy prioritising our wants in a way that conforms to *no* plausible conception of morality is going to be implausible. For example, the claim that all (or most) humans want a policy that prioritises their own wants at the expense of everyone else’s wants is, I believe, implausible. Some humans may want this, and perhaps many of us occasionally daydream about how nice such a policy would sometimes be, but (I hope that) few mentally healthy individuals seriously want it.

  • http://hanson.gmu.edu Robin Hanson

    Rebecca, yes there is a possible position on morality which says morality is exactly getting everyone what they want, in which case there would indeed be no conflict between morality and getting everyone what they want. But you seem to accept that most people’s concept of morality seems different from this, and thus can conflict with wants.

    I said in the post that people want to appear more moral than they want to be, and I don’t see that this theory is undermined by your observation that people publicly condemn immorality.

  • Bill Liles

Your comments seem reasonable, and I agree: a cursory study of recent history surely suggests Republicans like to wrap their greed in blankets of religiosity.

  • http://www.mistacademy.com/blog/ Mathew Crawford

I suspect we define morality differently. To me, morality defines the distinction between good and bad. Of course, this distinction is not universally judged – it may be judged differently by each individual. In that sense, I tend toward a position of egoistic utilitarianism.

However, I also tend toward a position that scales. If we each behave in a way that would cause us all misery should everyone behave that way, then we are shooting not a collective foot but all our feet. In other words, we need to be mindful to disentangle ourselves from prisoner’s dilemmas, which pool into tragedies of the commons.
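
A minimal payoff table shows the conflict in question; the numbers are the usual illustrative prisoner’s-dilemma ordering, not anything specific to the comment above.

```python
# Prisoner's dilemma sketch: defecting is each player's best response
# whatever the other does, yet mutual defection leaves both worse off
# than mutual cooperation. Payoff values are illustrative.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def best_response(their_move):
    return max(["cooperate", "defect"], key=lambda m: PAYOFFS[(m, their_move)])

print(best_response("cooperate"), best_response("defect"))  # defect defect
# Mutual defection (1 each) is worse for everyone than cooperation (3 each).
```

Scale this up to many players and the same logic becomes a tragedy of the commons.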

When there is no such conflict, acting to achieve our ends is perfectly moral. I suspect both (a) that this is true most of the time, for most of our actions, and (b) that these are not the interesting decisions in life…

  • http://www.thefaithheuristic.com Justin Martyr

Phil’s critique is interesting but, I think, wrong. Desire utilitarianism is specifically limited to the domain of malleable desires – those that can be manipulated through social pressure. That means the Buddhism option is right out. Moreover, Satanism is probably out too, given that people have prosocial traits like reciprocity, guilt, and sympathy.

A better critique is that desire utilitarianism is nothing but a Nash bargaining problem among large numbers of people. The nature of the social contract and our social norms are themselves the product of a bargaining problem. Thus it suffers from all the problems that arise in bargaining situations in which there is a powerful actor with a large threat advantage. It is justice as mutual advantage in a new dress, not preference utilitarianism or anything unique. See also my criticism based on the case of the 900 racists.
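
To make the threat-advantage point concrete, here is a minimal two-player Nash bargaining sketch; the pie size and threat points are invented for illustration.

```python
# Nash bargaining solution for splitting a pie of fixed size:
# maximize (u1 - d1) * (u2 - d2) subject to u1 + u2 = pie, where
# d1 and d2 are each side's threat (disagreement) payoffs. The closed
# form gives each side its threat point plus half the surplus.

def nash_split(pie, d1, d2):
    surplus = pie - d1 - d2
    assert surplus >= 0, "no gains from agreement"
    return d1 + surplus / 2, d2 + surplus / 2

print(nash_split(1.0, 0.0, 0.0))  # equal threats -> (0.5, 0.5)
print(nash_split(1.0, 0.6, 0.0))  # big threat advantage -> (0.8, 0.2)
```

A powerful actor with a large threat advantage walks away with most of the pie, which is exactly the worry about desire utilitarianism above.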


  • Drewfus

So econ informs us on how to get what we want, and morality informs us on what we should do. Econ concerns the supply and demand of goods and services; morality is concerned with good and bad. OK, so the first distinction is economics defining economics, and the second is morality defining morality. What about each domain’s definition of the other domain?

    • What is the morality based definition of economics?
    • What is the economics definition of morality?

Regarding the first point: how can the moralist hope to handle economics without economic theory?
On the second point: economics deals with quantities such as supply, demand, price, and efficiency. The economics definition of morality concerns how the moralist hopes to manipulate these variables, and to what effect. So take a supply and demand cross and ask: what does the moralist want to do here? It must be something to do with moving away from equilibrium (see the sketch below). Otherwise why bother?
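
To put that closing question in symbols, here is a minimal linear supply-and-demand sketch; all parameter values are invented for illustration.

```python
# Linear market: inverse demand P = a - b*Q, inverse supply P = c + d*Q.
# A moralist's per-unit tax t on a disfavored good drives a wedge between
# the price consumers pay and the price suppliers receive, shifting the
# traded quantity away from the untaxed equilibrium. Parameters are
# illustrative.

def traded_quantity(a, b, c, d, tax=0.0):
    # Demand price equals supply price plus the tax:
    # a - b*Q = c + d*Q + tax  =>  Q = (a - c - tax) / (b + d)
    return (a - c - tax) / (b + d)

a, b, c, d = 10.0, 1.0, 2.0, 1.0
print(traded_quantity(a, b, c, d))           # 4.0 units at the untaxed equilibrium
print(traded_quantity(a, b, c, d, tax=2.0))  # 3.0 units: moved off equilibrium
```

On this reading, the moralist’s tool kit is exactly the set of wedges (taxes, bans, subsidies) that move the market away from where unaided wants would put it.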