Dealism, Futarchy, and Hypocrisy

Many people analyze and discuss the policies that might be chosen by organizations such as governments, charities, clubs, and firms. We economists have a standard set of tools to help with such analysis, and in many contexts a good economist can use such tools to recommend particular policy options. However, many have criticized these economic tools as representing overly naive and simplistic theories of morality. In response I’ve said: policy conversations don’t have to be about morality. Let me explain.

A great many people presume that policy conversations are of course mainly about what actions and outcomes are morally better; which actions do we most admire and approve of ethically? If you accept this framing, and if you see human morality as complex, then it is reasonable to be wary of mathematical frameworks for policy analysis; any analysis of morality simple enough to be put into math could lead to quite misleading conclusions. One can point to many factors that are given little attention by economists, but that are often considered relevant for moral analysis.

However, we don’t have to see policy conversations as being mainly about morality. We can instead look at them as being more about people trying to get what they want, and using shared advisors to help. We economists make great use of the concept of “revealed preference”; we infer what people want from what they do, and we expect people to continue to act to get what they want. Part of what people want is to be moral, and to be seen as moral. But people also want other things, and sometimes they make tradeoffs, choosing to get less morality and more of these other things.

When organizations must make choices, and people talk together about those choices, they may well try to persuade each other by referring to outside advice. To be effective in influencing the policies that individuals will privately push for, such shared advice must persuade its audience that it will help them to get more of what they want. And one good way to do this is for shared advisors to help identify policy choices that tend to be closer to what we economists call the “Pareto frontier” of wants. This is the set of outcomes where no one can get more of what they want without someone else getting less.

Of course even when one can identify this frontier of wants, negotiations over organization policies will still contain a “zero-sum” element of choosing a point on that frontier. Each change from one frontier point to another gives some people more of what they want, at the cost of giving other people less. Even so, it can be quite useful for negotiators to know more about the location of this frontier, as moving the space of policies being considered toward the frontier offers the potential to give everyone more of what they want. And economic tools of analysis are quite directly useful for achieving this goal.
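The Pareto frontier described above can be made concrete with a small sketch. The policy names and utility numbers below are invented for illustration; the point is only the definition: a policy is on the frontier if no other policy gives everyone at least as much and someone strictly more.

```python
# Minimal sketch: finding the Pareto frontier among candidate policies.
# Each policy gives each person some amount of what they want (utilities).
# All policy names and utility numbers here are invented for illustration.

def dominates(p, q):
    """p dominates q if p gives everyone at least as much, and someone more."""
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

def pareto_frontier(policies):
    """Keep only the policies not dominated by any other policy."""
    return {name: u for name, u in policies.items()
            if not any(dominates(v, u) for v in policies.values())}

policies = {
    "status quo":   (2, 2),
    "reform A":     (5, 1),   # helps person 1 at person 2's expense
    "reform B":     (1, 5),   # the reverse
    "compromise C": (4, 4),   # better for both than the status quo
}

print(sorted(pareto_frontier(policies)))
# → ['compromise C', 'reform A', 'reform B']
```

Note that the frontier still contains very unequal points like reform A and reform B; identifying the frontier only rules out dominated options like the status quo, while choosing among frontier points remains the zero-sum part of the negotiation.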

Organization policy choices resulting from negotiations and pressures from different people who want different things can be seen as “deals.” I’ve given the name “dealism” to this framing of policy discussions as being more about what people want, and of shared advice as being about locating the frontier of wants. Preference utilitarianism claims that it is morally right and good to give everyone more of what they want. But dealism does not make this claim. Dealism instead says that as everyone wants to induce deals where they get more of what they want, they thus also want policy conversations to be influenced by shared advisors who can point everyone toward the Pareto frontier of wants. Dealism sees a big place for policy conversations that are about such wants and deals. There can be conversations that are primarily about the morality of policy choices, but there can also be other sorts of conversations.

My proposal to use decision markets as the basis of a form of governance, futarchy, can be seen as a dealist approach to governance. In it, a polity must choose an explicit measure of the outcomes that will be preferred by their collective choices, and then betting markets pick the policies estimated to most consistently and effectively give them more of this measure. This would push political conversations to be more explicitly about what everyone wants, relative to how to get it.
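Futarchy's decision rule can be sketched in a few lines. The market prices below are invented for illustration: each is a conditional betting market's estimate of the agreed welfare measure, given that a particular policy is adopted, and the polity simply adopts the policy whose market forecasts the highest measure.

```python
# Minimal sketch of futarchy's decision rule, with invented numbers.
# Each price is a conditional market's estimate of the agreed welfare
# measure if that policy were adopted; adopt the highest-priced policy.

def choose_policy(market_prices):
    """Adopt the policy whose conditional market predicts the most welfare."""
    return max(market_prices, key=market_prices.get)

market_prices = {"keep current law": 103.2, "proposed reform": 107.8}
print(choose_policy(market_prices))  # → proposed reform
```

The division of labor is the point: voters argue over the welfare measure (what they want), while speculators with money at stake estimate which policy best achieves it (how to get it).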

In my opinion, one of the strongest criticisms of futarchy is that people prefer more hypocritical forms of governance. Humans like to pretend to want some things, while actually wanting other things, and human minds and culture are highly adapted to such hypocritical conversations. Having policy conversations mix up value and fact considerations makes such hypocrisy easier. Futarchy would instead force people to be clearer about what they want.

A similar criticism can be made against dealism more generally. We like to pretend that morality gets higher weights in our wants than it actually does. This pretense is aided by the pretense that policy conversations are mainly about morality. We must surely care a lot about morality if that is the main topic of our policy conversations! We over-emphasize morality relative to our other wants, and also values of all sorts relative to facts. But for this to work, our morality needs to have a lot of context-dependent flexibility, so we can cloak other wants as moral considerations. And we want our shared advisors, at least the ones we pretend to listen to, to also seem to talk mostly about morality.

This all suggests that dealism may be more true than most of us want to admit. We want to actually listen to advisors who point us toward the Pareto frontier of wants, while pretending to listen to advice that is presented as mainly being about morality. These can either be two different groups of advisors, or it can be the same group where the actual basis for their advice is different from what they pretend. For example, we can pretend to listen to pundits while actually listening to hard-headed economists. Or we can listen to apparently soft-headed economists who are actually hard-headed.

If we individually don’t have much influence over policy, yet our associates still judge us strongly on our policy opinions, then the tradeoff is more at the group level, between our groups seeming to focus on morality, versus our groups getting what they collectively want. Individuals will then want to push for policies that their allies will see as making their groups seem to be focused on morality, while actually giving those groups more of what they want. Individual wants won’t matter so much.

  • I don’t doubt this is the case, as morality is often invoked selectively, more for its argument value than for anything eternal or universal. A greater problem may be that relative status is zero-sum, and many will delude or collude in seeking more of it even if it means less overall; denial, lying, or obfuscation are not beyond them, and facts only get in the way of what they want. They will just select the morals that further their ends, and it is less about hypocrisy than about values, and the importance of disguising those values for acceptability.

    • Disguising your values for acceptability IS hypocrisy.

      • Unless you have disguised them from yourself (self-deception being common), you can be honest but wrong. Honest people usually care about whether they are right rather than being unquestioning, but when your income depends on it…

  • Part of what people want is to be moral, and to be seen as moral. But people also want other things, and sometimes they make tradeoffs, choosing to get less morality and more of these other things.

    This doesn’t seem to bear any recognizable relation to moral thinking. It’s true that people often articulate their moral intuitions this way, because we have been acculturated to do so, but given bounded cognition, it’s still a reasonable stance for people to trust their moral intuitions more than their local revealed preferences over a series of transactions, as a way to steer reality towards configurations compatible with their long-run values.

    • It sounds like you are saying that people SHOULD want only to be moral, with nothing else added. I’m not disagreeing with that. I’m saying that as a matter of consistent, persistent fact, they do NOT want only to be moral.

    • Joe

      Your concern is regarding people doing things that give them small amounts of pleasure in the short term but are a big net cost over their whole lifetime, right? If so, one solution is to note that “you now” and “you in the future” are like two different people, except that only the former gets to make decisions, which can have enormous consequences for the latter without actually consulting them first.

      In this model, people blowing their life savings on slot machines looks more like an inability to coordinate across time than a self-destructive but valid single set of preferences.

      • Michael Vassar

        What’s the point of slot machine winnings, if not the future spending they are a predictor of? One can plausibly interpret some sorts of hedonism, including FB games and drug dealers, in terms of time coordination failure, but not slot machines or financial intermediaries.

      • Joe

        The point is the thrill of gambling, surely? That’s a positive experience in itself, even if it doesn’t result in any financial gain.

  • Moral conflicts energize policy discussions, in the same way that you hope to channel political polarization. (I wonder why you like the former idea but not the latter reality.) If you succeed in stripping facts of their moral interpenetration, few folks will even be interested in betting on your markets.

    • The fact that wants are more in near mode, and idealism more in far mode, suggests passions are actually stronger about wants. I suspect it is hypocritical morality talk that is in fact the most energized.

      • Politics is inherently far, inasmuch as a person pursuing his wants would rationally abstain from it. (Voter’s paradox, etc.) When functioning in far mode, far objects may be more compelling. Also, as you note in a subsequent posting, “we see things far away in less detail, and draw inferences about them more from top level features and analogies than from internal detail. Yet even though we know less about such things, we are more confident in our inferences!” The greater confidence in far-mode inferences contributes to the ability of far-mode to energize near-mode conation.

        If far-mode doesn’t energize near-mode, then how do you explain the power of the “sacred objects” studied by sociologists or the distance between many voters’ politics and their mundane wants?

        [On the energization of near-mode by far-mode, see “Cognitive Dissonance: The Glue of the Mind.”]

      • I won’t claim to understand much about where emotional energy comes from.

      • Romeo Stevens

        Emotional energy is a hard to fake signal about which moral alliances/schelling points we will commit to. Humans are bad actors, so the best way to get game theoretic cooperation is to expend energy in the direction of being the sort of person who gets worked up about the schelling point. Then sunk cost and identity consistency effects help enforce what has been signaled.


  • Romeo Stevens

    People prefer inefficient outcomes if the efficient frontier outcome is sufficiently unequal. This is because they are worried about other agents having even more bargaining power in future tradeoffs. I think this is basically a tribal instinct.

    • But how does knowing about a more efficient *frontier* set of points cause a more unequal outcome point?

      • Romeo Stevens

        It doesn’t cause an unequal outcome immediately.

        Let’s say that a butter maximizer is arguing with a guns maximizer, but knows that, due to power differentials in the negotiation, point B is the likely outcome. The butter maximizer prefers point C’ (marked red) over point B, due to worries that the guns maximizer getting more guns will make future bargaining positions even worse for them.

        You might wonder why, if C can’t move towards D, they might be able to move towards C’ instead, but real life is complicated. They might find it easier to burn resources to move towards C’. In an ideal world this would merely be part of the above-board negotiations, driving the guns maximizer to be more willing to slide towards D, but uncertainty (and the degree to which future actions can be made explicit) might prevent that.

        This shows up in real life such as when Obama made the comment that even a tax revenue neutral tax on wall street might be desirable due to “fairness considerations.”

      • “real life is complicated” is a very weak argument. It might suggest that your claim might be true, because hey most anything might be true. But that’s hardly enough to raise the probability of your claim.

  • Marian Andrecki

    Vocabulary question: Is there a difference between dealism and contractarianism?