

Twenty years ago this month I started my job here at GMU. My “job talk paper”, which got me this job, was on a game theory model of paternalism. While the journal that published it insisted that it be framed as a model of drug regulation, it was in fact far more general. (Why would a journal be reluctant to publish a general result? The econ journal status hierarchy dictates that only top journals may publish general results.) Oddly, I’ve never before discussed that paper here (though I discussed related concepts here). So here goes.
Here’s the abstract:
One explanation for drug bans is that regulators know more than consumers do about product quality. But why not just communicate the information in their ban, perhaps via a “would have banned” label? Because product labeling is cheap-talk, any small market failure tempts regulators to lie about quality, inducing consumers who suspect such lies to not believe everything they are told. In fact, when regulators expect market failures to result in under-consumption of a drug, and so would not ban it for informed consumers, regulators ex ante prefer to commit to not banning this drug for uninformed consumers.
Consider someone choosing how much alcohol or caffeine to drink per day on average. The higher the quality of alcohol or caffeine as a drink, in terms of food, fun, productivity, and safety, the more they should want to drink it. However, they are ignorant about this quality parameter, and so must listen to advice from someone who knows more. Furthermore, this advisor doesn't exactly share their interests; for the same value of quality, the advisor might want them to drink more or less than they would choose for themselves. Thus the advisor has a reason to be less than fully honest in their advice, and so the listener has a reason not to believe everything they are told.
When the advisor can only advise, we have a standard “cheap talk signaling game”. In equilibrium, the advisor can credibly convey only one of a limited number of quality levels. For example, they might only be able to say either “bad” or “good”. The person being advised will believe this crude advice, but would not believe more precise advice, due to the advisor's incentive to lie. The closer the interests of these two people, the more distinctions the advisor can make and be believed, and thus the better off both of them are on average.
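For readers who want something concrete, here is a minimal sketch of the textbook uniform-quadratic version of this game. The specific functional forms here (quality uniform on [0,1], quadratic losses, a fixed advisor bias b) are illustrative assumptions, not what my paper requires; my paper allows much more general forms.

```python
# Minimal sketch of the standard uniform-quadratic cheap-talk game.
# Illustrative assumptions: quality q ~ Uniform[0,1]; the listener picks
# activity level a to minimize (a - q)^2; the advisor instead wants a = q + b,
# a slightly higher level, where b > 0 measures the conflict of interest.

def partition(b, n):
    """Boundaries t_0 <= ... <= t_n of an n-message equilibrium, if it exists.

    An advisor at boundary t_i must be indifferent between the listener
    actions induced by the two adjacent messages, which gives the recursion
    t_{i+1} = 2*t_i - t_{i-1} + 4*b.  With t_0 = 0 and t_n = 1 this pins
    down t_1 = (1 - 2*n*(n-1)*b) / n; the equilibrium exists only if t_1 > 0.
    """
    t1 = (1 - 2 * n * (n - 1) * b) / n
    if t1 <= 0:
        return None  # advice this fine-grained would not be believed
    t = [0.0, t1]
    for i in range(1, n):
        t.append(2 * t[i] - t[i - 1] + 4 * b)
    return t

def max_messages(b):
    """Most quality distinctions the advisor can credibly make at bias b."""
    n = 1
    while partition(b, n + 1) is not None:
        n += 1
    return n

def listener_loss(b, n):
    """Listener's ex ante expected loss with n credible messages."""
    return 1 / (12 * n**2) + b**2 * (n**2 - 1) / 3

for b in (0.2, 0.05, 0.01):
    n = max_messages(b)
    print(f"bias {b}: {n} credible message(s), listener loss {listener_loss(b, n):.4f}")
# bias 0.2 supports only 2 messages (roughly "bad"/"good"); smaller bias
# allows finer credible advice and lower ex ante losses for both parties.
```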
My innovation was to give the advisor an additional option: instead of offering advice, they can ban the person from drinking alcohol or caffeine. The result of a ban is a low (though maybe not zero) level of the activity. When quality happens to be low, the advisor would rather ban than give the lowest possible advice. This is in part because the listener expects the advisor to ban when quality is low, so the lowest believable advice comes to imply a not-so-low quality, inducing more activity than the advisor wants. So even when their interests differ by only a little, the advisor bans often, far more often than they would if the listener were perfectly informed about quality.
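To see the temptation to ban concretely, we can continue the toy setup sketched above, with bias b = 0.05, where at most three quality distinctions are credible. Suppose, again purely as my illustrative assumption, that a ban results in the low activity level 0.05 rather than zero (say, due to some evasion of the ban). This is a static comparison only, not the paper's full equilibrium:

```python
# Toy illustration (not the paper's full equilibrium) of why the ban option
# tempts the advisor.  Same assumed setup as above: quality q ~ Uniform[0,1],
# quadratic losses, bias b = 0.05, at most 3 credible quality distinctions.
b = 0.05
t1 = (1 - 2 * 3 * 2 * b) / 3      # lowest boundary of the 3-message partition
a_low = t1 / 2                    # listener's response to the lowest message
a_ban = 0.05                      # my assumed post-ban activity level

def advisor_loss(a, q):
    return (a - (q + b)) ** 2     # the advisor wants activity level q + b

for q in (0.0, 0.005, 0.02, 0.10):
    ban, advise = advisor_loss(a_ban, q), advisor_loss(a_low, q)
    print(f"q={q:.3f}: {'ban' if ban < advise else 'advise'}"
          f"  (ban loss {ban:.5f} vs advice loss {advise:.5f})")
# At the lowest qualities, the coarse lowest message induces more activity
# than even this upward-biased advisor wants, so banning beats advising.
# In equilibrium this feeds back: since the listener expects bans at low
# quality, the lowest message comes to mean "not that low", raising the
# listener's response to it and making bans attractive over a wider range.
```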
My model wasn’t about alcohol in particular; it applies to any one-dimensional choice of an activity level, a choice influenced by an uncertain one-dimensional quality level. Thus my model can help us understand why people placed into a role where they can either advise on or ban some activity would often ban, even when both parties are fully rational and their interests differ only by small amounts. The key is that even small differences can induce big lies and an expectation of frequent bans, which forces the advisor to ban often, as extreme advice will not be believed.
My model allows relatively general functional forms for the preferences of both parties, and for how those depend on quality. It can also handle the case where the advisor has the option to “require” the product, resulting in some high consumption level. (Though I never modeled the case where the advisor has both the option to ban and the option to require the product, in addition to giving advice.) The model can also be easily generalized to varying levels of info for both parties, and to random errors in the choices made by both parties. The essential results don’t change much in these variations.
The main theorem that I prove in my paper is for the case where the advisor’s differing interest makes that advisor prefer a higher activity level for any given quality level. For example, the advisor might be an employer and the listener might be their employee. In this case, for any given quality level, the employer might prefer their employee to drink more caffeine than the employee would choose, in order to be more productive at work. What I prove is that on average both parties are better off in the game where the advisor is not able to ban the activity; this is because the option to ban reduces the activity level on average.
Similarly, when the advisor prefers a lower activity level for any given quality level, both parties are better off when the advisor is not able to require the activity. This could apply to the case where the activity is alcohol, and the advisor is the government. Due to the possibility of auto accidents, the government could prefer less alcohol consumption for any given level of alcohol quality.
This main theorem has direct policy relevance for things like medicines, readings, and investments. If policy makers tend to presume that people on average consume too few medicines, read too little, and invest too little, then they should regret having the ability to ban particular medicines, readings, or investments, as this ability will on average make both sides worse off.
So that’s my model. In my next post, I’ll discuss how much this actually helps us understand where we do and don’t see paternalism in the world.
Re: "If policy makers tend to presume that people on average consume too few medicines, read too little, and invest too little, then they should regret having the ability to ban particular medicines, readings, or investments, as this ability will on average make both sides worse off."
If removing an action an agent can voluntarily take makes them better off, then the likely explanation is that they are stupid; otherwise they would simply choose not to take that action in the first place.
fixed; thanks.