
I am extremely uncomfortable with the conflation of harms inflicted by violent threat with "harms" that do not entail violence. The whole idea of inflicting extreme forms of organized, irresistible violence, such as taxation, on persons accused of causing non-violent harms to others (e.g., causing distress to the envious by being richer or prettier) is completely repugnant.

This is especially poignant in the context of the state: state-inflicted violence is extremely cheap - once a coercive monopoly exists, it is as cheap as paying a uniformed thug $15 an hour to enforce whatever law the state machine came up with. On the other side, there is enormous harm inflicted on the victims of the law: the harm of being exposed to and broken by violence. Therefore, whoever manages to get hold of the levers of state power can extremely cheaply inflict almost arbitrary levels of violent harm on his victims - and the state is *built* to prevent any form of measuring the relative moral weights of the violent harms inflicted on a law's victims vs. the "harms" allegedly prevented by the law.

This situation is the antithesis of the moral efficiency that Robin wants to achieve - there is no bargaining process or any other effective way of measuring the relative weights of the harms involved, and therefore using the state to achieve one's goals is extremely likely to increase the overall harms inflicted on members of the in-group.

Do you think otherwise?


Ok, here's a context and a "harm".

Context: Horseless carriages are becoming commonplace.

Harm: Unemployment in horse-related industries: stable hands, street scoopers, buggy whip manufacturers, saddle makers, etc.

Question: Should the government act (and if so, how) to mitigate this harm?


I agree, this does seem to be dodging the issue.

On the subject of non-physical, subjective externalities (like envy, racism, etc.), I have a hard time taking many utilitarians seriously. In the debates I've seen, I have never heard one deal with the fact that racist and jealous preferences are mutable and subject to change by incentives.

It isn't so much that racists' and jealous people's suffering is justifiable as it is that jealousy and racism are memes which we would do well to stamp out. It's here that I think the average person's moral intuition is correct: we shouldn't reward bad culture, because that might lead to more bad culture. In the short term we might be better off if racist and jealous people were satisfied, but in the long term we could reduce racism and jealousy and be better off still. Unfortunately, the longer into the future we look, the less able we are to make accurate predictions and to out-perform deontological heuristics.

I also have a hard time taking very seriously policy recommendations which do not look at all costs. Robin mentions these costs, so I do take his opinions seriously. I would like to hear some mention of how markets already take transaction costs into account when allocating public goods, and a comparison of market costs vs. government costs.


> The answer would be “yes” as in these things should be taken into consideration.

According to what moral framework?


The answer would be "yes" as in these things should be taken into consideration.

There are two caveats. First, for a variety of practical reasons it might make sense not to do anything about such things anyway. Many deontological rules are useful policy heuristics, based on how the real world works, even if you're a consequentialist and don't accept them as absolutes.

And second, soft measures - like paying members of the minority to move away, or mildly taxing conspicuous consumption to reduce it while keeping everyone's relative position intact - may be Pareto-preferred over both doing nothing and hard measures. I don't believe in economics' absolutist position that every single problem can be solved by adjusting monetary incentives, but I think such measures are vastly underutilized in modern politics.
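
For what it's worth, the conspicuous-consumption case can be sketched numerically. Here is a minimal toy model of why a tax on a purely positional good can be Pareto-preferred; the numbers, names, and functional form are my own assumptions, not anything from the thread:

```python
# Toy model of positional spending (illustrative only).
# Two identical neighbors with income 100 split it between ordinary
# consumption and house size h, where only *relative* house size matters.

INCOME = 100.0
ENVY_WEIGHT = 0.5  # how much being behind (or ahead of) the neighbor matters

def utility(own_h, other_h, tax_rate, rebate):
    """Ordinary consumption plus a purely positional term in house size."""
    consumption = INCOME - own_h * (1 + tax_rate) + rebate
    positional = ENVY_WEIGHT * (own_h - other_h)
    return consumption + positional

# No tax: the race to keep up pushes both neighbors to h = 90.
u_untaxed = utility(90, 90, tax_rate=0.0, rebate=0.0)   # -> 10.0

# A 50% tax on h, with revenue rebated lump-sum, induces both to cut back
# to h = 60. Relative position is unchanged (the positional term is still
# zero for both), but each neighbor consumes more: a Pareto improvement.
TAX = 0.5
REBATE = 60 * TAX                                       # revenue returned per person
u_taxed = utility(60, 60, tax_rate=TAX, rebate=REBATE)  # -> 40.0

print(u_untaxed, u_taxed)  # 10.0 40.0 -- both strictly better off
```

The mechanism is that the positional term depends only on the *difference* in h, so a symmetric cut costs no one any status, while the rebated tax revenue shows up as extra ordinary consumption.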


This discussion seems to dodge a substantive disagreement between Robin and Will. The disagreement boils down to one's stance on a classic critique of utilitarianism: should we a priori consider the desires of people when those desires are intuitively reprehensible? Two examples:

(1) If a population consisted of a tiny minority and a vast racist majority who experienced strong anxiety at the mere presence of the minority, would it be moral to (say) relocate the minority against its collective will? Should the racist desires of the majority be counted intrinsically?

(2) If many people experience envy (an emotion/desire which has been argued to be immoral) when their neighbor builds a bigger house, causing them to build even bigger houses than they otherwise would, can this justify a tax on conspicuous consumption a la Robert Frank? Should we consider the mental distress associated with envy at all?

Utilitarians, I think, are required to answer "yes" to all those questions (although they will spend much time wringing their hands about the implausibility of the examples or all the unintended consequences that should be taken into account). Many deontologists will answer "no". Likewise, collectivists will tend to answer "yes" and libertarians "no".

It appears to me that in this post you have simply identified a situation where, for practical reasons, utilitarians and deontologists happen to agree on policy.


"... we shouldn’t count things as harms if others might plausibly persuade us to change our minds on what is really a harm."

I basically agree with this, but I think it could be made more general. Specifically, I don't see any need to reference 'others,' or any mental entities besides oneself. It would be better to simply say:

"we shouldn't count things as harms if we might plausibly change our minds on what is really a harm."

This is simpler and more general. It is really just an exhortation to listen to your doubt: if you aren't fully certain that something is bad, don't label it as such. If you think there is still evidence to explore on a matter, don't act like you know everything.

I only bring this up because invoking 'others' makes this sound like a social rule, when it's simply about recognizing when your information is incomplete and reserving the real option to make a better decision in the future, after more information becomes available. Whether that information comes through social channels, from the non-mental environment, or from one's own thought processes doesn't matter for the logic.
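
The "real option" point can be made concrete with a toy value-of-information calculation; the probabilities and payoffs below are my own illustration, not anything from the post:

```python
# Toy value-of-information calculation (numbers are made up).
# Suppose a practice is genuinely harmful with probability 0.6; labeling it
# a harm pays +1 if it really is harmful, -1 if it is not, and withholding
# judgment pays 0.

P_HARM = 0.6

# Commit now: label it a harm under uncertainty.
ev_commit_now = P_HARM * 1 + (1 - P_HARM) * (-1)   # -> 0.2

# Reserve the option: wait for (perfect, for simplicity) information,
# then label it a harm only if it turns out to be one.
ev_wait = P_HARM * 1 + (1 - P_HARM) * 0            # -> 0.6

option_value = ev_wait - ev_commit_now             # -> 0.4
print(ev_commit_now, ev_wait, option_value)        # ~0.2 0.6 0.4 (modulo float rounding)
```

Waiting dominates exactly because the verdict can still be revised; once the label is committed to, that option value is forfeited.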
