“I'm not sure why you say that Landsburg has only shown things about "current" preferences; the math seems more general than that.” That was sloppy language on my part. What I meant was that, from an evolutionary perspective, moral preferences evolved to help us obtain resources; they are means to the end of increased resources. If that is true, why should we want the social planner, in allocating resources, to include moral preferences in the objective function? It's almost like observing that someone works 80 hours a week, and so including a preference for work along with a preference for money when inferring their utility function. (The difference is that a person can tell you that they don't like work, and they really just want the money. But economists don't like to take what people say at face value anyway.) Ultimately this line of thought leads us to ask why we should want a social planner to maximize preferences at all. One answer is that this is what utilitarian philosophy tells us. But once Landsburg starts to question the bases of the tradition, he invites more fundamental questioning. I think this is what a couple of earlier commentators were getting at in suggesting that he had too narrow a view of morality.

"I expect the conditions can be considerably weakened, but that it will take a lot more math work to prove that fact." Maybe, but Landsburg says "It is clear that something like Hypothesis 2 is necessary to generate a uniqueness theorem," and the intuition behind the accompanying example (Remark 1, p.8) seems quite general. I suspect the same is true of hypothesis 1. If hypothesis 1 is not true, the topography of the objective function changes as the allocation of goods changes and in general uniqueness fails because the maximum shifts as the topography changes. I suppose it might be possible to find some condition that ensures that the maximum is stationary even though the topography otherwise changes, but I'd surprised if such a condition weren't restrictive.

I should emphasize that I find hypotheses 1 & 2 normatively very attractive. Hypothesis 1 says in effect that your view as to the just distribution of goods should not change as the actual distribution of goods changes, and I think Landsburg is right in saying that this formalizes the requirement "that preferences have genuine ethical content." Hypothesis 2 says that your ethics have to be more than pure self-interest. (Hypothesis 2 does not say that your philosophical preferences must oppose your self-interest. Landsburg's point is that at the allocation that maximizes the objective function, you will believe that you are getting more than you think you deserve, not that you must believe that you don't deserve anything.)
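For concreteness, here is a rough formalization of how I am reading the two hypotheses. The notation is mine, not the paper's, so treat it as a sketch of my interpretation rather than Landsburg's own statement.

```latex
% Sketch of my reading; notation mine, not Landsburg's.
% Write agent i's overall preference over an allocation x = (x_1, ..., x_n) as
\[
  U_i(x) \;=\; u_i(x_i) \;+\; \phi_i(x),
\]
% where u_i is the self-regarding (material) component and \phi_i is the
% purely philosophical component.
%
% Hypothesis 1, as I read it: \phi_i does not depend on which allocation
% currently obtains -- a single fixed function \phi_i ranks candidate
% allocations regardless of the status quo.
%
% Hypothesis 2, as I read it: \phi_i is not purely self-regarding -- it is not
% simply an increasing function of i's own share x_i alone.
```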

I think it is an attractive normative position to say that if your preferences don't satisfy hypotheses 1 & 2 then we are justified in refusing to consider them in constructing the social planner’s objective function. If we were normatively willing to impose that restriction then it would seem to follow from Landsburg's paper that there would be a unique self-justifying welfare function. Landsburg hints that hypothesis 2 formalizes the basic notion behind Kantian moral philosophy. I don’t know enough about Kantian moral philosophy to know whether he is right about this, but if he is, his paper might amount to a reconciliation of the utilitarian and Kantian traditions. If that is so, it would be a major philosophical contribution.

My concern is that Landsburg's opening claim is that he will show that the objective function arises endogenously, without the introduction of values from outside the economic model. For this claim to be supported, his hypotheses 1 & 2 would have to be descriptively true. No doubt that is why he calls them hypotheses, rather than conditions. But if they are interpreted as conditions imposed for normative reasons, those reasons come from outside of economics. His paper may well be a major contribution to moral philosophy, but I don't think it succeeds in divorcing economics from moral philosophy.

Norman, the paper gave sufficient conditions, not necessary ones. I expect the conditions can be considerably weakened, but that it will take a lot more math work to prove that fact. I'm not sure why you say that Landsburg has only shown things about "current" preferences; the math seems more general than that.

I only skimmed the paper, but it looked to me as if hypothesis 2 said in effect that everyone has philosophical preferences that *oppose* their interests. That seems to me not merely implausible but outright absurd.

It's pretty obvious that you can't get any normative conclusion without putting *some* sort of normative assumptions in. If Landsburg has in fact shown that the (to my mind rather weak and very reasonable) assumption of a "preference for the maximization of current preferences, broadly defined" puts heavy restrictions on what one can reasonably aim for, that seems pretty interesting even though it doesn't amount to a derivation of all ethics from pure reason.

Robin, I'm a bit late on this, but the paper was very interesting. Thanks for raising it. I have two sets of concerns about the paper. First, the conditions for uniqueness are attractive: they are elegantly simple, and they capture a plausible definition of what it means to have truly philosophical preferences. However, I don't find them particularly plausible as a description of people's actual 'philosophical' preferences. Hypothesis 1, that "the purely philosophical components of agents' preferences are independent of the current allocation of goods," strikes me as not generally true. Anecdotally, at least, many people are egalitarian while young, when they don't have a lot of goods, and 'meritocratic' when older. (This accepts that the philosophical component of preferences has some independent content; as I understand it, Hypothesis 1 requires complete independence.) If I understand correctly, Hypothesis 2 requires that no one has entirely self-regarding philosophical preferences. It is plausible that some people have philosophical preferences that are not entirely self-regarding, but I don't find it at all plausible that no one is purely selfish.

More fundamentally, I think that the key assumption, "that people care about what kind of society they live in, independent of its effect on their material well-being," is problematic. If our notions of morality are the product of evolution, then our preferences about what kind of society we live in are ultimately explicable in terms of material allocations. I am not saying that people don't care about the objective function except insofar as it affects material allocation. At a minimum, the link between individual well-being and morality is very complex, so people no doubt use evolved rules of thumb in establishing moral preferences. It is also possible that evolved preferences are out of step with current circumstances. For either of these reasons it is reasonable to model individuals' preferences about the social order separately from their preferences about their own material well-being. But if preferences regarding the social welfare function are in effect a complex (evolutionary) function of preferences over individual well-being, it is not clear why a social planner should be asked to maximize a welfare function that includes preferences over that social welfare function in addition to direct preferences over material well-being. This isn't to say that a maximand of direct individual material well-being is normatively 'right' on evolutionary arguments, or that a maximand of current preferences broadly defined is normatively 'wrong.' It is enough for my argument that the maximand of current preferences is not the sole internally consistent choice. The stated goal of Landsburg's paper was to show that economics has something to say on the normative question of what the social planner's objective function should be. It seems to me that he has not succeeded, as the argument rests on a normatively arbitrary preference for the maximization of current preferences, broadly defined.

Would there be any way to get around people's preferences over distribution by keeping them ignorant of that distribution? In my blogpost that I mention above, that is my personal approach to the issue of disgust.

Your thesis on decision procedures does seem very relevant here, but in Landsburg's setting, selecting a decision procedure according to utilitarian criteria may itself have preference satisfaction costs, and taking those costs into account may have costs, and so on.

These posts are probably all rather unclear; sorry for that. If this were a forum I'd be editing them like mad.

So from what I understand, you're saying that a social planner should have two optimization targets, one real target that is kept secret (e.g. utilitarianism), and one "public" target that is judged according to the secret target, taking into account people's reactions to targets. And again from what I understand, Landsburg assumes that this isn't feasible, and the social planner's real target has to be the same as the public target. Is that a fair description?

In which case the best process to use is whichever one leads to the best outcome when all costs are taken into account (now we are right on the topic of my thesis and only a little distance from Landsburg's paper). Unfortunately we will not be able to do this perfectly, as we don't know the costs and benefits of each process, and determining these will exact further costs. This lack of perfection does not stop us being able to predictably do better with certain approaches than with others.

To clarify, there's no problem with adding everyone's satisfaction and ranking outcomes based on the result; the problem happens when implementing such a decision process has satisfaction costs (compared to implementing non-utilitarian decision processes).

Steven, you are right about a similarity between Landsburg's idea and the horrible monsters scenario. There are various ways to interpret such a scenario, and Landsburg's approach seems to address one of them. There are indeed old answers to this, but they are not much discussed. My thesis is actually on roughly this topic, though in my thesis I don't need to make any assumptions about the nature of the utility function (just that there is one). My confidence regarding this comes from many years of thinking about and discussing the issues. I am a moral philosopher by trade, and we do actually think a lot about these topics. I'm open to persuasion on the issue of preferences versus mental states, but my degree of belief in the preferences approach is low enough to warrant not spending much of my time reading up on it. My point (b) above is a good example. Sometimes the satisfaction of a preference (say, that there is alien life in the Andromeda galaxy right now) could not possibly have any causal effect on you. The idea that things can be good for you that have no physical effect on you is rather at odds with the physical world view. Similarly, the satisfaction of the preference that I drive a Volvo (because I think they are safer) is rather dubious in adding to my well-being if they are not in fact safer. Preference utilitarians often try to get around such issues by limiting things to 'rational' preferences of some sort. I think that such an approach can be made to work, but only if they restrict it to preferences to have more positively valued mental states, at which point it collapses into hedonism anyway.

Toby, I don't think that solves Landsburg's problem. You have to take into account that people might suffer from knowing that social planners were adding up everyone's utility; then straightforwardly maximizing total utility fails to actually maximize it. (Though if they kept their utilitarianism a secret, there would be no paradox.) It's a sort of "cost of information" issue.

Steven, I agree that satisfaction often depends upon what happens to others (in fact a strong motivation for most people is to do *better than* their neighbours, not merely to do well). I'm just saying that we should count this only insofar as it affects a person's own satisfaction (benefits to people in the developing world do not benefit *me* in any way unless I know about them, and only then to the extent that they actually positively affect my mental states). Then we can just add up these satisfactions of the different people and we have our degree of goodness for the distribution. I don't think we need to mess around with the fixpoints of distribution functions (however interesting it may be to do so).

Toby, I didn't mean to imply you didn't have reasons; I was just struck by your strong confidence in rejecting such a common view. However you conceive "fulfillment", it would seem to exclude preferences I have over the world after I die, such as wanting to have a good reputation then.

OK, I think I get it. Landsburg models people as having utility functions depending on the social planner's optimization target. If the social planner's optimization target is the sum of everyone's utility, and if there are a lot of people who hate living in a world where social planners maximize the sum of everyone's utility, then that leads to a paradox. Landsburg, in his paper, tries to prove the existence/uniqueness of optimization targets that *don't* suffer from such a paradox.

As a critique of utilitarianism, it amounts to the same thing as "what if there are horrible monsters who eat people if and only if you behave like a utilitarian", which looks to me like a confusing but probably old issue.
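To make this concrete, here is a toy numerical sketch of the self-reference described above; the two-agent setup, the numbers, and the candidate targets "sum" and "min" are my own invention, not Landsburg's actual model.

```python
# Toy illustration (my construction, not Landsburg's model): agents' utilities
# depend on which objective the planner announces, so an announced objective
# can be self-undermining.

ALLOCATIONS = [(x, 10 - x) for x in range(11)]  # split 10 units between two agents
TARGETS = ["sum", "min"]                        # hypothetical candidate objectives

def utilities(alloc, announced):
    """Material utility plus a toy 'philosophical' penalty: both agents
    dislike living under a planner that announces plain sum-maximization."""
    penalty = 3.0 if announced == "sum" else 0.0
    return [alloc[0] - penalty, alloc[1] - penalty]

def aggregate(target, us):
    return sum(us) if target == "sum" else min(us)

def outcome(announced):
    """Allocation chosen, and utilities realized, when `announced` is both
    announced publicly and honestly pursued."""
    best = max(ALLOCATIONS, key=lambda a: aggregate(announced, utilities(a, announced)))
    return best, utilities(best, announced)

for t in TARGETS:
    alloc, us = outcome(t)
    print(f"announce {t!r}: allocation {alloc}, total {sum(us)}, min {min(us)}")

# With these toy numbers, announcing and pursuing 'sum' yields total utility 4,
# while announcing 'min' yields total utility 10: pursuing the sum is not how
# you maximize the sum, which is the paradox described above.
```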

I guess I should have said "Landsburg would say the utilitarian would say the resource should go to the utilitarian". But now I'm rather confused.
