What Evidence Reluctant Authors?

Steven Landsburg is a great economics writer, with a monthly Slate column and many popular economics books.  But he is also a sharp theorist, and his October 2007 Journal of Public Economic Theory paper is in fact the best theory paper I’ve read in several years, as it helps to resolve fundamental issues in moral philosophy. 

Now unfortunately Landsburg’s paper is too philosophical to get the attention it deserves from economists, and too mathematical to get the attention it deserves from philosophers.  But one might hope that at least he would be calling as much attention as possible to his paper. Alas, it seems not.  I doubt he will mention it in his column or popular books.  And though he is speaking at our department in a few weeks, he has so far rejected my suggestion to talk on this paper; he’d rather talk about his third paper so far on quantum game theory. 

So I am left to wonder: does he know something I do not about the value of this paper?  In any case, here is the idea:

When we make "social choices," i.e., when we choose outcomes that affect many people, we want to consider how those outcomes affect these people.  And we want to consider not so much the direct physical effects, but rather how outcomes satisfy preferences.  That is, we want outcomes that give people more of what they want, and less of what they don’t want.

When making such a choice we must ask: who counts for how much?  For example, does a little more benefit count about the same for each of us, or do we emphasize helping those who are the worst off?  Answers to such questions can be encoded in a "social justice function," which says how to best achieve social justice when there are conflicts between our differing desires.

Now we may each care not only about ordinary wants, but also about social justice.  That is, we may each care directly about who counts how much in our social choices, and thus in effect care about which social justice function governs social choices.  Thus our social choices will have two effects: they will satisfy more or less of our ordinary wants, and they will also give us more or less of the kind of social justice we desire. 

When we can each desire different sorts of social justice, Landsburg shows that most social justice functions are not "self-justifying," in the sense of doing as well as possible on both of these effects, i.e., best satisfying both our ordinary wants and our thirst for social justice.  In fact, he offers some plausible conditions under which there is only one self-justifying social justice function!
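To give a feel for the fixed-point idea, here is a toy sketch in Python; the setup, the numbers, and the single parameter rho standing in for a "social justice function" are my own inventions for illustration, not Landsburg's actual model.

```python
import numpy as np

# Toy sketch of the "self-justifying" idea.  A candidate "social justice
# function" is indexed by a single number rho in [0, 1]: rho = 0 rewards
# productivity, rho = 1 divides equally, and when scoring outcomes it mixes
# the utilitarian sum with the Rawlsian minimum in the same proportion.

rng = np.random.default_rng(0)
n = 5
endowment = 10.0
productivity = rng.uniform(0.5, 1.5, n)   # how much each agent contributes
ideal_rho = rng.uniform(0.0, 1.0, n)      # each agent's preferred justice function

candidates = np.linspace(0.0, 1.0, 101)

def allocation(rho):
    # Planner's allocation under rho: a blend of proportional-to-productivity
    # shares and equal shares.
    shares = (1 - rho) * productivity / productivity.sum() + rho / n
    return endowment * shares

def utilities(rho_used):
    # Each agent's overall utility: concave material utility plus a
    # "philosophical" term penalizing distance from their preferred rho.
    return np.sqrt(allocation(rho_used)) - np.abs(rho_used - ideal_rho)

def welfare(u, rho_evaluating):
    # How the justice function indexed by rho_evaluating scores a profile of
    # utilities: a mix of the sum and the minimum.
    return (1 - rho_evaluating) * u.sum() + rho_evaluating * u.min()

def rho_recommended_by(rho_evaluating):
    scores = [welfare(utilities(r), rho_evaluating) for r in candidates]
    return candidates[int(np.argmax(scores))]

# A justice function is "self-justifying" if, judged by its own standard,
# the best justice function to govern social choices is itself.
self_justifying = [r for r in candidates if rho_recommended_by(r) == r]
print("self-justifying rho values:", self_justifying)
```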

This suggests a fascinating resolution of basic questions in moral and political philosophy, such as "how much equality should we have?"  Landsburg’s answer:  just as much equality as we collectively want to have. 

  • mobile

    Maybe because he thinks the result is of no practical importance? Except to taunt social planners with accusations of cognitive dissonance? Social planning, such as it is, is too blunt an instrument for social planners or the rest of us to worry about second-order effects. It’s definitely an interesting paper for a certain audience, but Landsburg’s books and columns would not be so popular if he didn’t emphasize the practical applications of his fields’ theories.

  • josh

    Still, I would think “history” would be more interested in this kind of paper than in more practical policy discussions, in the same way that we still learn Arrow’s Theorem in undergrad.

  • http://cob.jmu.edu/rosserjb Barkley Rosser

    Too many ifs in the paper. Just to take his opening, Rawlsian-inspired example, in a large society there will be no clearly worst-off individual. The best one will be able to do, in any practical situation, will be to identify a group of the worst-off, with those presumably measured by real per capita income or wealth, perhaps adjusted in certain ways such as for age or whatever. One then simply goes about trying to improve the economic standing of this group as much as possible. That some may not wish to have their economic status so improved, well, that may be. But we already know that once we are dealing with a group, the Arrow theorem jumps in and we have no unambiguous social utility function coming out of these people. Big deal.

  • http://entitledtoanopinion.wordpress.com/ TGGP

    How much influence do papers on philosophy have on public policy?

    Jonathan Haidt has a lot of interesting things to say about moral intuitions and the different factors that are relevant to them. He thinks a lot of philosophical reasoning on the subject is simply affirming our gut-feelings, but he seemed to have in mind axiomatic systems rather than the mathematical modeling Landsburg does in this paper. I recommend the latest from him at Edge.org, which I have a post about here.

  • Toby Ord

    I read the paper and it is indeed quite interesting. It looks at the problem of how to distribute resources to best reflect people’s preferences about (a) distributions and (b) the distributive principles used to do the distributing. This is an interesting question and Landsburg seems to have made some progress on it. I am not qualified to say how much progress he has made, as I’m unsure about his assumptions (mainly because my knowledge of partial derivatives and economic theorems is not so strong) and I’m not sure that *anyone* is qualified, as the philosophical side is tricky too. John Broome would be my best suggestion of someone actually qualified to comment and I’m not sure that even Landsburg grasps all the ethical issues. Given the very small audience, I think the author could go to much greater lengths to elaborate on at least one side of the issue. His few paragraphs of ethics are certainly not enough for the economists and his brief discussion of his justifications and mathematical techniques is certainly not enough for moral philosophers (I’d probably be in the top percentile of mathematical literacy amongst moral philosophers and perhaps in the top percentile of interest in this type of thing amongst moral philosophers too).

    As it happens, I do not think that the results bear on the correct ethical theories as, while he demonstrates that we have *preferences* over distributive principles used (over and above our preferences for the distributions chosen), I do not think that these preferences count morally. I understand that some people think that all preferences count (even if the preferrer could never know whether it was satisfied), but I think this is a serious mistake. I think that all that matters is their actual degree of satisfaction with their experiences (i.e. hedonistic utilitarianism, not preference utilitarianism). In this case, the goodness of a world is simply the sum of utilities and the best world is the one with the highest sum. People get some satisfaction from a belief that good is being done, but this satisfaction is already included in the calculation (hedonistic utilitarianism has thus sufficiently accounted for this issue since Bentham’s time). Any further strength of preference beyond the personal satisfaction is pure *moral* preference and should not be weighed in at all. If I want benefits for others to increase their happiness, we shouldn’t count a policy of so benefitting them as increasing my welfare (over and above whatever happiness it gives me).

  • http://cureofars.blogspot.com/ Cure of Ars

    Morality is viewed too narrowly in this paper. Let me point you to an atheist psychologist’s paper that puts morality into a bigger context. Here is a quote:

    OK, so there are two psychological systems, one about fairness/justice, and one about care and protection of the vulnerable. And if you look at the many books on the evolution of morality, most of them focus exclusively on those two systems, with long discussions of Robert Trivers’ reciprocal altruism (to explain fairness) and of kin altruism and/or attachment theory to explain why we don’t like to see suffering and often care for people who are not our children.

    But if you try to apply this two-foundation morality to the rest of the world, you either fail or you become Procrustes. Most traditional societies care about a lot more than harm/care and fairness/justice. Why do so many societies care deeply and morally about menstruation, food taboos, sexuality, and respect for elders and the Gods? You can’t just dismiss this stuff as social convention. If you want to describe human morality, rather than the morality of educated Western academics, you’ve got to include the Durkheimian view that morality is in large part about binding people together. (Jonathan Haidt)

    I would argue that Haidt’s view is still too narrow, but I think he gets things moving in the right direction.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Cure, one shouldn’t expect every paper to address every issue.

    Toby, you seem a bit dogmatic – can you see no moral value whatsoever in giving me what I want, even if that doesn’t make me more happy?

  • http://cureofars.blogspot.com/ Cure of Ars

    Cure, one shouldn’t expect every paper to address every issue.

    My bad, I didn’t connect the dots well enough. If this morality system is based on social preferences, it also depends on the society-building aspects of morality. We prefer to be a part of a group. Seems to me that you can’t separate out one part of morality without destroying the whole. Let me quote another part of the paper to make this point.

    My conclusion is not that secular liberal societies should be made more religious and conservative in a utilitarian bid to increase happiness, charity, longevity, and social capital. Too many valuable rights would be at risk, too many people would be excluded, and societies are so complex that it’s impossible to do such social engineering and get only what you bargained for. My point is just that every longstanding ideology and way of life contains some wisdom, some insights into ways of suppressing selfishness, enhancing cooperation, and ultimately enhancing human flourishing.

    If preferences are made the standard, and we prefer groups and cultures that provide happiness, charity, longevity, and social capital, then I don’t know how you do this without a costly utilitarian bid, secular style.

    I admit I have no clue what the math behind the paper is, and I can tell that I am in over my head. You seemed impressed with the math part of it, and it may well be impressive. I will proceed to be quiet.

  • Stuart Armstrong

    So I am left to wonder: does he know something I do not about the value of this paper?

    Maybe, but it’s not a safe conclusion. I’ve written papers that I never want to see or talk about again, for complicated reasons to do with the elegance of the paper, how useful the results are to colleagues, and just how my interests have wandered. The quality of the paper is only one consideration among many.

    You yourself, Robin, must have papers or works you never talk about today. Was that mainly due to worries about their quality, or for other reasons?

  • http://profile.typekey.com/tobyord/ Toby Ord

    Robin, I don’t think I’m being dogmatic at all, merely expressing my view. I could give a detailed justification of why I think these things, but that might take 5,000 words, so I thought it was at least worth expressing why I was not completely taken by the paper with only as much justification as time permitted. Moreover, I reached my views on preference theories of welfare on my own by reasoned argument (taking them seriously and seeing the problems this leads to). I haven’t seen my criticisms in a book or article, but would be surprised if they weren’t there (otherwise I would want to publish them).

    To answer your question, I have a very broad understanding of happiness (so broad that the word itself is misleading); I prefer ‘fulfillment’ or ‘satisfaction’. Seeing a sad movie or helping someone in distress could well give you fulfillment without giving you happiness, and it would still be valuable. We often gain some fulfillment from helping others, but this is not always equal to the strength of our desire to help others. For example, I want to help others even at the expense of my total fulfillment (i.e. even once you take into account the satisfaction of doing so, I would still be sacrificing). I don’t think the satisfaction of such preferences to help others should count towards *my* wellbeing above and beyond the point at which it makes me feel better. It is done for others, not for me.

    Counting preferences above and beyond the impact they have on our own mental states leads to (a) serious problems of double counting and (b) cases where someone can be ‘benefitted’ without changing their mental states (or indeed causally affecting them at all). For an example of (a), suppose there is a utilitarian who would benefit from some indivisible resource more than anyone else in the group amongst whom it is being distributed (and the others are all completely self-interested). On my account the utilitarian should get the resource. On Landsburg’s view (as I understand it), it should go to someone else, as this improves that person’s wellbeing almost as much, fits entirely with the utilitarian’s ethical view, and that person’s welfare is valued by the utilitarian as well. This is a case of double counting because the utilitarian is saying that everyone counts equally, not that he is casting an extra vote for everyone else which can then be added to their own self-interest to show that others count for more than the utilitarian does.
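    To put rough numbers on that worry (the figures and the 0.5 weight are invented purely for illustration):

    ```python
    # One indivisible resource; a utilitarian agent U and a selfish agent S.
    hedonic_benefit = {"U": 10.0, "S": 9.0}   # experienced satisfaction if that agent gets it

    def hedonic_total(recipient):
        # Hedonistic view: only experienced satisfaction counts.
        return hedonic_benefit[recipient]

    # Preference-counting view (as I read it): U also has a *moral* preference
    # that S's welfare count, so S's benefit shows up once as S's own
    # satisfaction and again as satisfaction of U's moral preference.
    moral_weight = 0.5

    def preference_total(recipient):
        own = hedonic_benefit[recipient]
        u_moral = moral_weight * hedonic_benefit["S"] if recipient == "S" else 0.0
        return own + u_moral

    for r in ("U", "S"):
        print(r, hedonic_total(r), preference_total(r))

    # Hedonic view: give it to U (10 > 9).  Preference view: give it to S
    # (9 + 4.5 = 13.5 > 10), even though U would have enjoyed it more;
    # S's benefit has been counted twice.
    ```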

    I’m not saying that preference views are *totally* implausible, for they are a damn sight better than many other views of what makes someone’s life go well. I merely disagree with them, and one of the main reasons for doing so is that I find the type of double counting featured in this article to be morally implausible (not as implausible as many other things, but enough to lower my credence by quite a bit). I do think that it is good to have the article, as it is on one of the better moral views and seems quite neat. I think it would be valuable to philosophers (and economists) to have a version where more than a handful (possibly zero) of people could understand both sides of the issue and see whether or not it really was a neat argument, or whether the assumptions let it down.

  • http://www.acceleratingfuture.com/steven steven (non-landsburg)

    Toby, it seems plausible to me that people’s experienced satisfaction would depend both on their own consumption and on the distribution as a whole. If someone experienced satisfaction from consuming more AND extra satisfaction from knowing that more of the resource went to them and not others, then a utilitarian (at least ignoring long-term issues) would take that extra satisfaction into account and let the other person have it. In effect the selfish person *would* benefit more from having the resource than the utilitarian, so there’s no paradox. But that sort of extra satisfaction is not a given — it’s also possible the selfish person didn’t care about distribution issues at all and just cared about consuming more, in which case Landsburg would say the resource should go to the utilitarian.

  • http://www.acceleratingfuture.com/steven steven

    I guess I should have said “Landsburg would say the utilitarian would say the resource should go to the utilitarian”. But now I’m rather confused.

  • http://www.acceleratingfuture.com steven

    OK, I think I get it. Landsburg models people as having utility functions depending on the social planner’s optimization target. If the social planner’s optimization target is the sum of everyone’s utility, and if there are a lot of people who hate living in a world where social planners maximize the sum of everyone’s utility, then that leads to a paradox. Landsburg, in his paper, tries to prove the existence/uniqueness of optimization targets that *don’t* suffer from such a paradox.
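    Here is a minimal, made-up illustration of that self-reference problem (all numbers invented):

    ```python
    # Agents' realized utility depends both on the allocation and on which
    # target the planner is known to be using.
    material_utility = [6.0, 4.0]          # from the allocation itself, the same under either target
    penalty_if_sum_rule = [-2.0, -2.0]     # dislike of living under an openly sum-maximizing planner

    def realized_utilities(target):
        if target == "maximize the sum":
            return [m + p for m, p in zip(material_utility, penalty_if_sum_rule)]
        return list(material_utility)      # some other, less resented rule

    # Judge both targets by the sum rule's own criterion:
    for target in ("maximize the sum", "some other rule"):
        print(target, "->", sum(realized_utilities(target)))

    # The sum of utilities is higher when the planner does *not* use the sum
    # rule (10 vs 6), so "maximize the sum" recommends against itself here;
    # it is not self-justifying in the sense described above.
    ```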

    As a critique of utilitarianism, it amounts to the same thing as “what if there are horrible monsters who eat people if and only if you behave like a utilitarian”, which looks to me like a confusing but probably old issue.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Toby, I didn’t mean to imply you didn’t have reasons; I was just struck by your strong confidence in rejecting such a common view. However you conceive “fulfillment”, it would seem to exclude preferences I have over the world after I die, such as wanting to have a good reputation then.

  • http://profile.typekey.com/tobyord/ Toby Ord

    Steven, I agree that satisfaction often depends upon what happens to others (in fact a strong motivation for most people is to do *better than* their neighbours, not merely to do well). I’m just saying that we should count this only insofar as it affects a person’s own satisfaction (benefits to people in the developing world do not benefit *me* in any way unless I know about them, and only then to the extent that they actually positively affect my mental states). Then we can just add up these satisfactions of the different people and we have our degree of goodness for the distribution. I don’t think we need to mess around with the fixpoints of distribution functions (however interesting it may be to do so).

  • http://www.acceleratingfuture.com/steven steven

    Toby, I don’t think that solves Landsburg’s problem. You have to take into account that people might suffer from knowing that social planners were adding up everyone’s utility; then openly aiming to maximize total utility is not the way to maximize it. (Though if they kept their utilitarianism a secret, there would be no paradox.) It’s a sort of “cost of information” issue.

  • Toby Ord

    Steven, you are right about a similarity between Landsburg’s idea and the horrible monsters scenario. There are various ways to interpret such a scenario and Landsburg’s approach seems to address one of them. There are indeed old answers to this, but they are not much discussed. My thesis is actually on roughly this topic, though in my thesis I don’t need to make any assumptions on the nature of the utility function (just that there is one). My confidence regarding this comes from many years of thinking about and discussing the issues. I am a moral philosopher by trade and we do actually think a lot about these topics. I’m open to persuasion on the issue of preferences versus mental states, but my degree of belief in the preferences approach is low enough to warrant not spending much of my time reading up on it.

    My point (b) above is a good example. Sometimes the satisfaction of a preference (say that there is alien life in the Andromeda galaxy right now) could not possibly have any causal effect on you. The idea that things can be good for you that have no physical effect on you is rather at odds with the physical world view. Similarly, the satisfaction of the preference that I drive a Volvo (because I think they are safer) is rather dubious in adding to my wellbeing if they are not in fact safer. Preference utilitarians often try to get around such issues by limiting things to ‘rational’ preferences of some sort. I think that such an approach can be made to work, but only if they restrict it to preferences to have more positively valued mental states, at which point it collapses into hedonism anyway.

  • http://acceleratingfuture.com/steven steven

    To clarify, there’s no problem with adding everyone’s satisfaction and ranking outcomes based on the result; the problem happens when implementing such a decision process has satisfaction costs (compared to implementing non-utilitarian decision processes).

  • http://profile.typekey.com/tobyord/ Toby Ord

    In which case the best process to use is whichever one leads to the best outcome when all costs are taken into account (now we are right on the topic of my thesis and little distance from Landsburg’s paper). Unfortunately we will not be able to do this perfectly as we don’t know the costs and benefits of each process and determining these will exact further costs. This lack of perfection does not stop us being able to predictably do better with certain approaches than with other ones.

  • http://www.acceleratingfuture.com/steven steven

    So from what I understand, you’re saying that a social planner should have two optimization targets, one real target that is kept secret (e.g. utilitarianism), and one “public” target that is judged according to the secret target, taking into account people’s reactions to targets. And again from what I understand, Landsburg assumes that this isn’t feasible, and the social planner’s real target has to be the same as the public target. Is that a fair description?

  • http://www.acceleratingfuture.com/steven steven

    Your thesis on decision procedures does seem very relevant here, but in Landsburg’s setting, selecting a decision procedure according to utilitarian criteria may itself have preference satisfaction costs, and taking those costs into account may have costs, and so on.

    These posts are probably all rather unclear; sorry for that. If this were a forum I’d be editing them like mad.

  • http://entitledtoanopinion.wordpress.com/ TGGP

    Would there be any way to get around people’s preferences over distribution by keeping them ignorant of that distribution? In my blogpost that I mention above, that is my personal approach to the issue of disgust.

  • Norman Siebrasse

    Robin, I’m a bit late on this, but the paper was very interesting. Thanks for raising it. I have two sets of concerns about the paper. First, the conditions for uniqueness are attractive: they are elegantly simple and they capture a plausible definition of what it means to have truly philosophical preferences. However, I don’t find them particularly plausible as a description of people’s actual ‘philosophical’ preferences. Hypothesis 1, that “the purely philosophical components of agents’ preferences are independent of the current allocation of goods,” strikes me as not generally true. Anecdotally, at least, many people are egalitarian while young, when they don’t have a lot of goods, and ‘meritocratic’ when older. (This accepts that the philosophical component of preferences has some independent content; as I understand it, Hypothesis 1 requires complete independence.) If I understand correctly, Hypothesis 2 requires that no one has entirely self-regarding philosophical preferences. It is plausible that some people have philosophical preferences that are not entirely self-regarding, but I don’t find it at all plausible that no one is purely selfish.

    More fundamentally, I think that the key assumption, “that people care about what kind of society they live in, independent of its effect on their material well-being,” is problematic. If our notions of morality are the product of evolution, then our preferences about what kind of society we live in are ultimately explicable in terms of the material allocations. I am not saying that people don’t care about the objective function except insofar as it affects material allocation. At a minimum, the link between individual well-being and morality is very complex, so people no doubt use evolved rules of thumb in establishing moral preferences. It is also possible that evolved preferences are out of step with current circumstances. For either of these reasons it is reasonable to model individual preferences about social order separately from their preferences about their own material well-being. But if preferences regarding the social welfare function are in effect a complex (evolutionary) function of preferences over individual well-being, it is not clear why a social planner should be asked to maximize a welfare function that includes preferences over that social welfare function in addition to direct preferences over material well-being. This isn’t to say that a maximand of direct individual material well-being is normatively ‘right’ on evolutionary arguments, or that a maximand of current preferences broadly defined is normatively ‘wrong.’ It is enough for my argument that the maximand of current preferences is not the sole internally consistent choice. The stated goal of Landsburg’s paper was to show that economics has something to say on the normative question about what the social planner’s objective function should be. It seems to me that he has not succeeded, as the argument rests on a normatively arbitrary preference for the maximization of current preferences, broadly defined.

  • http://www.mccaughan.org.uk/g/ g

    I only skimmed the paper, but it looked to me as if hypothesis 2 said in effect that everyone has philosophical preferences that *oppose* their interests. That seems to me not merely implausible but outright absurd.

    It’s pretty obvious that you can’t get any normative conclusion without putting *some* sort of normative assumptions in. If Landsburg has in fact shown that the (to my mind rather weak and very reasonable) assumption of a “preference for the maximization of current preferences, broadly defined” puts heavy restrictions on what one can reasonably aim for, that seems pretty interesting even though it doesn’t amount to a derivation of all ethics from pure reason.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Norman, the paper gave sufficient conditions, not necessary ones. I expect the conditions can be considerably weakened, but that it will take a lot more math work to prove that fact. I’m not sure why you say that Landsburg has only shown things about “current” preferences; the math seems more general than that.

  • Norman Siebrasse

    “I expect the conditions can be considerably weakened, but that it will take a lot more math work to prove that fact.” Maybe, but Landsburg says “It is clear that something like Hypothesis 2 is necessary to generate a uniqueness theorem,” and the intuition behind the accompanying example (Remark 1, p.8) seems quite general. I suspect the same is true of Hypothesis 1. If Hypothesis 1 is not true, the topography of the objective function changes as the allocation of goods changes, and in general uniqueness fails because the maximum shifts as the topography changes. I suppose it might be possible to find some condition that ensures that the maximum is stationary even though the topography otherwise changes, but I’d be surprised if such a condition weren’t restrictive.

    I should emphasize that I find Hypotheses 1 & 2 normatively very attractive. Hypothesis 1 says in effect that your view as to the just distribution of goods should not change as the actual distribution of goods changes, and I think Landsburg is right in saying that this formalizes the requirement “that preferences have genuine ethical content.” Hypothesis 2 says that your ethics have to be more than pure self-interest. (Hypothesis 2 does not say that your philosophical preferences must oppose your self-interest. Landsburg’s point is that at the allocation that maximizes the objective function, you will believe that you are getting more than you think you deserve, not that you must believe that you don’t deserve anything.)

    I think it is an attractive normative position to say that if your preferences don’t satisfy hypotheses 1 & 2 then we are justified in refusing to consider them in constructing the social planner’s objective function. If we were normatively willing to impose that restriction then it would seem to follow from Landsburg’s paper that there would be a unique self-justifying welfare function. Landsburg hints that hypothesis 2 formalizes the basic notion behind Kantian moral philosophy. I don’t know enough about Kantian moral philosophy to know whether he is right about this, but if he is, his paper might amount to a reconciliation of the utilitarian and Kantian traditions. If that is so, it would be a major philosophical contribution.

    My concern is that Landsburg’s opening claim is that he will show that the objective function arises endogenously, without the introduction of values from outside the economic model. For this claim to be supported his hypotheses 1 & 2 would have to be descriptively true. No doubt that is why he calls them hypotheses, rather than conditions. But if they are interpreted as conditions imposed for normative reasons, those reasons come from outside of economics. His paper may well be a major contribution to moral philosophy, but I don’t think it succeeds in divorcing economics from moral philosophy.

  • Norman Siebrasse

    “I’m not sure why you say that Landsburg has only shown things about “current” preferences; the math seems more general than that.” That was sloppy language on my part. What I meant was that from an evolutionary perspective moral preferences evolved to help us obtain resources; they are means to the end of increased resources. If that is true, why should we want the social planner, when allocating resources, to include moral preferences in the objective function? It’s almost like observing that someone works 80 hours a week, and so including a preference for work along with a preference for money in inferring their utility function. (The difference is that a person can tell you that they don’t like work, and they really just want the money. But economists don’t like to take what people say at face value anyway.) Ultimately this line of thought leads us to ask why we should want a social planner to maximize preferences anyway. One answer is that this is what utilitarian philosophy tells us. But once Landsburg starts to question the bases of the tradition, he invites more fundamental questioning. I think this is what a couple of earlier commentators were getting at in suggesting that he had too narrow a view of morality.