Monthly Archives: August 2019

Why Not Hi-Tech Forms?

A half century ago, when people tried to imagine a future full of computers, I’m sure one of the most obvious predictions they made was that we today wouldn’t have to work so hard to fill out forms. Filling out forms seemed then to be a very mechanical task, based on explicit mechanical rules. So once computers had enough space to store the relevant data, and enough computing power to execute those rules, we should no longer need to fill out the most tedious parts of forms.

Oh sure, you might need to write an essay for a school application, or draw up a design when you ask your homeowner’s association for permission to build a shed. But all that other usual tedious detail, no.

Now this has in fact happened for businesses, at least for standard forms and for big business. In fact, this happened many decades ago. Most of them wrote or bought programs to fill out standard forms that they use to talk to customers, to suppliers, and to government. But for ordinary people, this mostly just hasn’t happened. Oh sure, maybe your web browser now fills in an address or a credit card number on a web form. (Though it mostly gets that wrong when I try it.) But not all the other detail. Why not?

Many poor people have to fill out a lot of forms to apply for many kinds of assistance. Roughly once a year, I’m told, at least. They see many of these forms as so hard to fill out that many of them just don’t bother unless they get help from someone like a social worker. So a lot of programs to help the poor don’t actually help many of those who are eligible, because they don’t fill out the forms.

So why doesn’t some tech company offer a form app, where you give all your personal info to the form and it fills out most parts of most forms for you? You just have to do the unusual parts. And they could have a separate app to give to orgs that create forms, so they can help make it easier for their forms to get filled out. Yes, much of the effort to make this work is more in standardization than in abstract computer algorithms. But still, why doesn’t some big firm do it?

I suggested all this to a social worker I know, who was aghast; she didn’t want this tech firm knowing all these details, like her social security number. But if you fill out all these forms by hand today, you are telling it all to one new org per year. Adding one firm to the list to make it all much easier doesn’t seem like such a high cost to me.

But maybe this is all about the optics; tech firms fear looking like big brother if they know all this stuff about you. Or maybe legal liability would fall on these tech firms if the form had any mistakes. Or maybe privacy laws prevent them from even asking for the key info. And so we all suffer with forms, and poor folks don’t get the assistance offered to them. And we all lose, though those of us who are better at filling out forms lose less.


How Idealists Aid Cheaters

Humans have long used norms to great advantage to coordinate behavior. Each norm requires or prohibits certain behavior in certain situations, and the norm system requires that others who notice norm violations call attention to those violations and coordinate to discourage or punish them.

This system is powerful, but not infinitely so. If a small enough group of people notice a minor enough norm violation, and are friendly enough with each other and with the violator, they often coordinate instead to not enforce the norm, and yet pretend that they did so. That is, they let cheaters get away with it.

To encourage norm enforcement, our social systems make many choices of how many people typically see each behavior or its signs. We pair up police in squad cars, and decide how far away in the police organizational structure sits internal affairs. Many kinds of work are double-checked by others, sometimes from independent agencies. Schools declare honor codes that justify light checking. At times, we “measure twice and cut once.”

These choices of how much to check are naturally tied to our estimates of how strongly people tend to enforce norms. If even small groups who observe violations will typically enforce them, we don’t need to check as much or as carefully, or to punish as much when we catch cheaters. But if large diverse groups commonly manage to coordinate to evade norm enforcement, then we need frequent checks by diverse people who are widely separated organizationally, and we need to punish cheaters more when we catch them.

I’ve been reading the book Moral Mazes for the last few months; it is excellent, but also depressing, which is why it takes so long to read. It makes a strong case, through many detailed examples, that in typical business organizations, norms are actually enforced far less than members pretend. The typical level of checking is in fact far too little to effectively enforce common norms, such as against self-dealing, bribery, accounting lies, fair evaluation of employees, and treating similar customers differently. Combining this data with other things I know, I’m convinced that this applies not only in business, but in human behavior more generally.

We often argue about this key parameter of how hard or necessary it is to enforce norms. Cynics tend to say that it is hard and necessary, while idealists tend to say that it is easy and unnecessary. This data suggests that cynics tend more to be right, even as idealists tend to win our social arguments.

One reason idealists tend to win arguments is that they impugn the character and motives of cynics. They suggest that cynics can more easily see opportunities for cheating because cynics in fact intend to and do cheat more, or that cynics are losers who seek to make excuses for their failures, by blaming the cheating of others. Idealists also tend to say that while other groups may have norm enforcement problems, our group is better, which suggests that cynics are disloyal to our group.

Norm enforcement is expensive, but worth it if we have good social norms that discourage harmful behaviors. Yet if we underestimate how hard norms are to enforce, we won’t check enough, and cheaters will get away with cheating, canceling much of the benefit of the norm. People who privately know this fact will gain by cheating often, as they know they can get away with it. Conversely, people who trust norm enforcement to work will be cheated on, and lose.

When confronted with data, idealists often argue, successfully, that it is good if people tend to overestimate the effectiveness of norm enforcement, as this will make them obey norms more, to everyone’s benefit. They give this as a reason to teach this overestimate in schools and in our standard public speeches. And so that is what societies tend to do. Which benefits those who, even if they give lip service to this claim in public, are privately selfish enough to know it is a lie, and are willing to cheat on the larger pool of gullible victims that this policy creates.

That is, idealists aid cheaters.

Added 26Aug: In this post, I intended to define the words “idealist” and “cynic” in terms of how hard or necessary it is to enforce norms. The use of those words has distracted many. Not sure what are better words though.


Ways to Choose A Futarchy Welfare Measure

In my futarchy proposal, I suggest a big change in how we aggregate info re our policy choices, but not in how we decide what outcomes we are trying to achieve. My reason: one can better evaluate proposals that do not change everything, but instead only change a bounded part of our world. So I described choosing a “national welfare function” the way we now choose most things, via a legislature that continually passes bills to edit and update a current version. And then I described a new way to estimate what policy actions might best increase that welfare. (I also outline an agenda mechanism for choosing which policy proposals to evaluate when.)

In this post, I want to consider other ways to choose a welfare function. I’ll limit myself here to the task of how to choose a function that makes tradeoffs between available measured quantities. I won’t discuss here how to choose the set of available measured quantities (e.g., GDP, population, unemployment) to which such functions can refer. Options include:

1) As I said above, the most conservative option is to have an elected legislature vote on edits to an existing explicit function. Because that’s the sort of thing we do now.

2) A simple, if radical, option is to use a “market value” of the nation. Make all citizenships tradeable, and add up the market value of all such citizenships. Add that to the value of all the nation’s real estate, and any other national assets with market prices. With this measure, the nation would act like an apartment complex, maxing the net rents that it can charge, minus its expenses. (A related option is to use a simple 0 or 1 measure of whether the nation survives in some sufficient form over some very long timescale.)

3) A somewhat more complex option would be to define a simple measure over possible packages of measured quantities, then repeatedly pick two random packages (via that measure) and ask a random citizen which package they prefer. Then fit a function that tries to predict current choices. (Like they do in machine learning.) Maybe slant the random picks toward the subspaces where citizen choice tests will add the most info to improve the current best fit function.
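The fitting step in this option can be sketched concretely. This is a minimal sketch, assuming a linear welfare function over the measured quantities and logistic choice noise (Bradley-Terry style); the simulated citizen, the three quantities, and the “true” preference weights are all invented for illustration:

```python
import numpy as np

# Sketch of option 3: fit a linear welfare function w(x) = theta . x
# from random pairwise package choices, via gradient ascent on the
# logistic log-likelihood. The "true" weights below are made up.
rng = np.random.default_rng(0)
true_theta = np.array([1.0, 0.5, -2.0])  # e.g. weights on GDP, population, unemployment

def citizen_prefers_a(a, b):
    """Simulated citizen: noisily prefers the package with higher true welfare."""
    return rng.random() < 1 / (1 + np.exp(-(true_theta @ (a - b))))

# Generate random package pairs and record the citizen's choices.
pairs = [(rng.normal(size=3), rng.normal(size=3)) for _ in range(2000)]
choices = np.array([citizen_prefers_a(a, b) for a, b in pairs], dtype=float)
diffs = np.array([a - b for a, b in pairs])

# Fit theta by plain gradient ascent.
theta = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-diffs @ theta))
    theta += 0.1 * diffs.T @ (choices - p) / len(diffs)

print(theta)  # should roughly recover true_theta, up to sampling noise
```

The “slant the random picks” refinement in the text would correspond to active-learning sample selection, choosing package pairs where the current fit is least certain.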

4) An option that requires a lot of complexity from each citizen is to require each citizen to submit a welfare function over the measured quantities. Use some standard aggregation method to combine these into a single function. (For example, require each function to map to the [0,1] interval and just add them all together.) Of course many organizations would offer citizens help constructing their functions, so they wouldn’t have to do much work if they didn’t want to. Citizens who submit expensive-to-compute functions should pay for the extra computational cost that they induce.
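The aggregation in this option is simple enough to sketch. Only the [0,1] normalization and the summation come from the text; the three citizen functions and the measured quantities are invented for the example:

```python
import math

# Sketch of option 4: each citizen submits a welfare function over the
# measured quantities, each required to map into [0, 1], and national
# welfare is just their sum. The example functions below are invented.
citizen_functions = [
    lambda q: 1 / (1 + math.exp(-(q["gdp"] - 20))),      # cares mostly about GDP
    lambda q: max(0.0, 1.0 - q["unemployment"] / 25.0),  # cares about unemployment
    lambda q: min(1.0, q["population"] / 400.0),         # cares about population
]

def national_welfare(quantities):
    """Aggregate welfare: sum of per-citizen scores, each checked to lie in [0,1]."""
    scores = [f(quantities) for f in citizen_functions]
    assert all(0.0 <= s <= 1.0 for s in scores), "each submitted function must map to [0,1]"
    return sum(scores)

print(national_welfare({"gdp": 21.0, "population": 330.0, "unemployment": 4.0}))
```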

5) Ralph Merkle (of Merkle-tree fame) proposed that “each citizen each year report a number in [0,1] saying how well their life has gone that year”, with the sum of those numbers being the welfare measure.

I’m sure there must be other interesting options, and I’ll add them here if I hear of some. Whatcha got?

A common issue with all these mechanisms is that, under futarchy, every time a bill is considered, those who trade on it acquire assets specified in terms of the then-current national welfare measure. So the more often that the official welfare measure changes, the more different kinds of assets are in circulation. These assets last until all the future measures that they refer to are measured. This is a reason to limit how often the official measure changes.

Inspired by a conversation with Teddy Collins.

Added 22Aug: Some polls on this choice: the status quo approach is the most popular option, followed by market value and then fitting random picks.


Paternalism Is About Status

… children, whom he finds delightful and remarkably self-sufficient from the age of 4. He chalks this up to the fact that they are constantly lied to, can go anywhere and in their first years of life are given pretty much anything they please. If the baby wants the butcher knife, the baby gets the butcher knife. This novel approach may not sound like appropriate parenting, but Kulick observes that the children acquire their self-sufficiency by learning to seek out their own answers and by carefully navigating their surroundings at an early age. … the only villagers whom he’s ever seen beat their children are the ones who left to attend Catholic school. (more)

Bofi forager parenting is quite permissive and indulgent by Western standards. Children spend more time in close physical contact with parents, and are rarely directed or punished by parents. Children are allowed to play with knives, machetes, and campfires without the warnings or interventions of parents; this permissive parenting style has been described among other forager groups as well. (more)

Much of the literature on paternalism (including my paper) focuses on justifying it: how much can a person A be helped by allowing a person B to prohibit or require particular actions in particular situations? Such as parents today often try to do with their children. Most of this literature focuses on various deviations from simple rational agent models, but my paper shows that this is not necessary; B can help A even when both are fully rational. All it takes is for B to sometimes know things that A does not.

However, this focus on justification distracts from efforts to explain the actual variation in paternalism that we see around us. Sometimes third parties endorse and support the ability of B to prohibit or require actions by A, and sometimes third parties oppose and discourage such actions. How can we best explain which happens where and when?

First let me set aside situations where A authorizes B to, at some future date, limit or require actions by A. People usually justify this in terms of self-control, i.e., where A today disagrees with future A’s preferences. To me this isn’t real paternalism, which I see as more essentially about the extra info that B may hold.

Okay, let’s start with a quick survey of some of the main observed correlates of paternalism.


A Model of Paternalism

Twenty years ago this month I started my job here at GMU. My “job talk paper”, which got me this job, was on a game theory model of paternalism. While the journal that published it insisted that it be framed as a model of drug regulation, it was in fact far more general. (Why would a journal be reluctant to publish a general result? The econ journal status hierarchy dictates that only top journals may publish general results.) Oddly, I’ve never before discussed that paper here (though I discussed related concepts here). So here goes.

Here’s the abstract:

One explanation for drug bans is that regulators know more than consumers do about product quality. But why not just communicate the information in their ban, perhaps via a “would have banned” label? Because product labeling is cheap-talk, any small market failure tempts regulators to lie about quality, inducing consumers who suspect such lies to not believe everything they are told. In fact, when regulators expect market failures to result in under-consumption of a drug, and so would not ban it for informed consumers, regulators ex ante prefer to commit to not banning this drug for uninformed consumers.

Consider someone choosing how much alcohol or caffeine to drink per day on average. The higher the quality of alcohol or caffeine as a drink, in terms of food, fun, productivity, and safety, the more they should want to drink it. However, they are ignorant about this quality parameter, and so must listen to advice from someone who knows more. Furthermore, this advisor doesn’t exactly share their interests; for the same value of quality, this advisor might want them to drink more or less than they would want to drink. Thus the advisor has a reason to be not entirely honest with their advice, and so the listener has a reason to not believe everything they are told.

When the advisor can only advise, we have a standard “cheap talk signaling game”. In equilibrium, the advisor picks one of a limited number of quality options. For example, they might only say either “bad” or “good”. The person being advised will believe this crude advice, but would not believe more precise advice, due to the incentive to lie. The closer are the interests of these two people, the more distinctions the advisor can make and be believed, and thus the better off both of them are on average.
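That last tradeoff can be made concrete. In the textbook uniform-quadratic cheap-talk model of Crawford and Sobel (1982) — an assumed setup for this sketch, not necessarily the functional forms of my paper — the most informative equilibrium partitions quality into at most N intervals, where N is the largest integer with 2·N·(N−1)·b < 1 and b is the advisor’s bias:

```python
def max_distinctions(b):
    """Largest partition size N supportable in equilibrium: the largest N
    with 2*N*(N-1)*b < 1, in the uniform-quadratic Crawford-Sobel model
    with advisor bias b."""
    n = 1  # a one-message "babbling" equilibrium always exists
    while 2 * (n + 1) * n * b < 1:
        n += 1
    return n

print(max_distinctions(0.25))   # -> 1:  big bias, only uninformative babbling
print(max_distinctions(0.1))    # -> 2:  "bad" vs "good" is believable
print(max_distinctions(0.001))  # -> 22: nearly aligned interests, many distinctions
```

As the bias shrinks, the number of believable distinctions grows without bound, matching the claim that closer interests let the advisor make more distinctions and be believed.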

My innovation was to give the advisor the additional option to, instead of offering advice, ban the person from drinking alcohol or caffeine. The result of a ban is a low (though maybe not zero) level of the activity. When quality happens to be low, the advisor would rather ban than give the lowest possible advice. This is in part because the listener expects the advisor to ban when quality is low. So even when their interests differ by only a little, the advisor bans often, far more often than they would if the listener was perfectly informed about quality.

My model wasn’t about alcohol in particular; it applies to any one-dimensional choice of an activity level, a choice influenced by an uncertain one-dimensional quality level. Thus my model can help us understand why people placed into a role where they can either advise or ban some activity would often ban. Even when both parties are fully rational, and even when their interests only differ by small amounts. The key is that even small differences can induce big lies and an expectation of frequent bans, which force the advisor to ban often because extreme advice will not be believed.

My model allows for relatively general functional forms for the preferences of both parties, and how those depend on quality. It can also handle the case when the advisor has the option to “require” the product, resulting in some high consumption level. (Though I never modeled the case where the advisor has both the option to ban or require the product, in addition to giving advice.) The model can also be easily generalized to varying levels of info for both parties, and to random errors in the choices made by both parties. The essential results don’t change much in those variations.

The main theorem that I prove in my paper is for the case where the advisor’s differing interest makes that advisor prefer a higher activity level for any given quality level. For example, the advisor might be an employer and the listener might be their employee. In this case, for any given quality level, the employer might prefer their employee to drink more caffeine than the employee would choose, in order to be more productive at work. What I prove is that on average both parties are better off in the game where the advisor is not able to ban the activity; this is because the option to ban reduces the activity level on average.

Similarly, when the advisor prefers a lower activity level for any given quality level, both parties are better off when the advisor is not able to require the activity. This could apply to the case where the activity is alcohol, and the advisor is the government. Due to the possibility of auto accidents, the government could prefer less alcohol consumption for any given level of alcohol quality.

This main theorem has direct policy relevance for things like medicines, readings, and investments. If policy makers tend to presume that people on average consume too few medicines, read too little, and invest too little, then they should regret having the ability to ban particular medicines, readings, or investments, as this ability will on average make both sides worse off.

So that’s my model. In my next post, I’ll discuss how much this actually helps us understand where we do and don’t see paternalism in the world.


Against Irony

Papua New Guinea. There are nearly 850 languages spoken in the country, making it the most linguistically diverse place on earth. … Mountains, jungles and swamps keep villagers isolated, preserving their languages. A rural population helps too: only about 13% of Papuans live in towns. …. Fierce tribal divisions—Papua New Guinea is often shaken by communal violence—also encourages people to be proud of their own languages. The passing of time is another important factor. It takes about a thousand years for a single language to split in two, says William Foley, a linguist. With 40,000 years to evolve, Papuan languages have had plenty of time to change naturally. (more)

… British printer who used a mirrored question mark to distinguish rhetorical questions in 1575, and John Wilkins, a British scientist who proposed an inverted exclamation mark to indicate irony in 1668. … The problem with adopting new irony punctuation is that if the people reading you don’t understand it, you’re no better off. … The ironic punctuation mark that the social internet can claim as its own is the sarcasm tilde, as in, “That’s so ~on brand~” … But tildes can feel a bit obvious. For a wryer mood, a drier wit, one might try a more subdued form of ironic punctuation—writing in all lowercase. …

Irony is a linguistic trust fall. When I write or speak with a double meaning, I’m hoping that you’ll be there to catch me by understanding my tone. The risks are high—misdirected irony can gravely injure the conversation—but the rewards are high, too: the sublime joy of feeling purely understood, the comfort of knowing someone’s on your side. No wonder people through the ages kept trying so hard to write it. (more)

Just as the urge to signal loyalty to people nearby has kept New Guinea folks from understanding people over the next mountain, our similar urge pushes us to write in ways that make it hard for those outside our immediate social circles to understand us. Using irony, we sacrifice ease of wide understanding to show loyalty to a closer community. 

Language is like religion, art, and many other customs in this way, helping to bond locals via barriers to wider interaction and understanding. If you think of yourself instead as a world cosmopolitan, preferring to promote world peace and integration via a global culture that avoids hostile isolationist ties to local ethnicities and cultures, then not only should you like world-wide travel, music, literature, emigration, and intermarriage, you should also dislike irony. Irony is the creation of arbitrary language barriers with the sole purpose of preventing wider cultural integration. 
