Tyler Cowen has a new book, Stubborn Attachments. In my next post I’ll engage his book’s main claim. But in this post I’ll take issue with one point that is to him relatively minor, but is to me important: the wisdom of the usual economics focus on preferences:
Pretty much everyone can be wrong about pretty much anything.
Thanks Robin, I now see how the situation I describe is fully accommodated by your analysis, and that there is an important sense in which our consistent choices always reflect our conception of what is good. Would you agree, though, that people even at their most deliberate can be wrong about the good (e.g. the millions who chose to live in dense, unsanitary cities during the Plague despite the fact that this did not foster their, or anyone else's, long-term well-being)?
Do the drug addicts themselves want these prohibitions and protections? Marcia Angell confirms my impression that we don't really know. ( https://www.nybooks.com/art... ) From observation, I don't think addicts want the "support" of prohibitions, but I don't count this impression as knowledge.
To me, illegitimate paternalism is imposing "protections" on people who don't want them, and drug prohibitions probably fit the bill--or maybe not. I think it matters. I don't think (despite a personal resentment for such measures) that protecting a group (from themselves) by imposing blanket prohibitions on society as a whole is in itself necessarily misguided.
Thanks Andaro. I appreciate your concern and should clarify that I wasn't advocating universal paternalism.
Whatever those choices are that you recommend to deal with someone who is addicted, if they are consistent in simple ways then they REVEAL a set of moral preferences. You want to push to have policies reflect those preferences.
Arch1, consider thinking up pre-commitment devices for your own future self instead of preaching the virtues of universal paternalism. If you don't trust your future self to deal with addiction, maybe you should have a service that searches your house for addictive substances and removes them from your vicinity. Or sign pre-commitment contracts consenting to forced intervention in case you become addicted.
If "most people have that attitude about themselves" too, then they too can consent to such pre-commitments. The problem with your language is that it is routinely used to force paternalism on others who did not consent to it. Unfortunately, this harms some core interests of at least some of us without our consent, which constitutes enemy action. Moralism is then used to make it seem legit, but of course it still remains enemy action, with retaliation costs.
There are some illegal drugs that I want to buy for rational reasons, and if you don't trust your future self to be rational, you should be able to ban yourself, rather than all of us, from buying them in the future.
Robin, I hope that if I ever become (for example) so horribly addicted to something that it is clearly destroying my own life and the well-being of my loved ones and friends, someone with guts will forcefully do their darnedest to get me out of that cycle, however much this conflicts with (what for lack of a better word we shall call) my preferences at the time.
I think that most people (perhaps even you) have that attitude about themselves. But unless I'm missing something, that attitude aligns much more with Caplan's overall approach than with yours.
Related to this discussion of moral preferences is Tyler's attempt to reconcile consequentialism with Kantian duties (e.g. universal human rights, justice, etc.). But as Robin correctly notes, different persons will have different moral preferences, and since there is no way of finding or testing for the correct set of preferences, I like Robin's idea in this post of "moral deals" or Rawlsian "moral bargaining." (I would like to read more about this possibility. Can anyone point me in the right direction?)
Moralistic framing is a social tool to increase the costs of disagreement. As far as I can tell, that's its only function. There is no moral preference that cannot be simply expressed as a personal preference instead, even if it is other-regarding.
Tyler's claim that the perpetuation of civilization is more important than justice is a statement of personal preference. Expressing it as a moral truth is merely an attempt to give it extra rhetorical weight for free. In fact, it's even slightly worse than that, because moral framing can create a perverse incentive for preference falsification, while the costs are externalized. Maybe saving the world ends up being really expensive, and our actual preferences would be better served by not even paying lip service to the idea that we should pay this cost. We would certainly be better off if the moralists accepted that the full costs of their own morality should be paid by them; making others pay the cost is no better or worse than any other form of coerced extraction.
The usual framework for dealing with uncertainty seems fine to me here.
I don't think they change predictably. And yes, you can have uncertainty over a distribution of likely future values, but that makes the claim that we can make Pareto improvements a very different one, where we're talking about Pareto improvements in expectation over different future values. At the very least, it requires thinking about values very differently than the standard frameworks do.
Values that change in predictable ways as a function of context can be seen as context-dependent values, which are not at all a problem for the usual econ framework.
I was thinking about this slightly differently earlier today, and think I disagree with both Tyler and Robin, for a different reason than the ones discussed. I'm concerned that the economic and moral framework won't work if human values are in part an endogenous result of value-fulfillment. Moral Pareto improvement is possible if values are stable. We seem to empirically observe that this isn't the case, in a particularly worrying way, as I just argued on LessWrong: https://www.lesswrong.com/p...
"...the key takeaway is that as humans find ever fewer things to need, they inevitably find ever more things to disagree about. Even though we expect convergent goals related to dominating resources, narrowly implying that we want to increase the pool of resources to reduce conflict, human values might be divergent as the pool of such resources grows."