For a while now I’ve been tired of the US political drama, and I’ve been hoping that others would tire of it as well. Then maybe we could talk about something else, like, say, my books. So I was thinking of writing a post reminding folks about futarchy, saying that politics doesn’t have to be this way. That is, we could largely (if not entirely) separate the political processes that deal with facts and values. In this case, even when there’s a big change in which values set policy, the fact estimates that set policy could remain the same, and be very expert.
In contrast, most of our current political processes mix up facts and values. The candidates we vote for, the bills they adopt, and the rulings that agencies make, all represent bundles of opinions on both facts and values. As a result, the fact estimates implicit in policy choices are less than fully expert, as such estimates must appeal to the citizens, politicians, administrators, etc. who we choose in part for their value positions. And so, to influence the values that our system uses, we must each talk about facts as well, even when we aren’t personally very expert on those facts.
On reflection, however, I think I had it wrong. Most of those engaged by the current US political drama are enjoying it, even if they say otherwise. They get a rare chance to feel especially self-righteous, and to bond more strongly with political allies. And I think the usual mixing of facts and values actually helps them achieve these ends. Let me explain.
For the purpose of making effective decisions, on average the best mix of fact vs. value in analysis has over 90% of the attention go to facts. Yes, you need to pay some attention to values, but most of the devil is in the details, and most of the relevant details are on facts. This is true at all levels, including personal, family, firm, church, city, state, and national levels.
However, for the purpose of feeling self-righteous and bonding with allies, value talk is much more potent than fact talk. You need to believe that your values are superior to feel self-righteous, and shared values bond you with allies much more strongly than do shared facts. Yet even for this purpose, the ideal conversation isn’t more than 90% focused on values; something closer to a 50-50 mix works better.
The problem is that when we frame a debate as a pure value disagreement, we actually find it harder to feel clearly superior, and to dismiss the other side. We aren’t really as confident in our value positions as we pretend. We can see how observers might perceive a symmetry between us and our opponents, and label us unfair if we just try to crush the other side to achieve our values at the expense of theirs.
However, by mixing enough facts into a value discussion, we can explain to ourselves and others why crushing them is really best for everyone. We can say that they just don’t understand that global warming is a real thing, or that kids really need two parents to grow up healthy. It is the other side’s failure to accept key facts that can justify to outsiders our uncompromising determination to crush them for a total win. Later on they may see we were right, and even thank us. But even if that doesn’t happen, right now we can feel justified in dismissing them.
I expect this dynamic plays out not only in national politics, but also in firm, church, and family politics. And it helps explain our widespread reluctance to adopt prediction markets, and other neutral fact estimation methods such as experiments, in relatively political contexts. We regularly want to support decisions that advance the values we share with our political allies, but we prefer the cover of seeming to be focused on estimating facts. To successfully use facts as a cover for values, we need to have enough fact issues mixed into our debates. And we need to avoid out-of-control fact estimation mechanisms that lack enough adjustment knobs to let us get the answers we want.
To explain things like opposition to prediction markets, notice that the kind of estimation mechanisms people tend to oppose aren’t those *they* merely lack control over, but those that don’t give control to anyone with normal sensibilities.
People are perfectly happy to agree that we should ask an expert to weigh in when deciding whether it is ethical to run some kind of medical trial, apply some kind of embryonic sex-selection mechanism, or offer poor people money for organs or sex. They will agree to this in principle even without any control over what that expert will say or who that expert will be, despite the risk that the expert will be allied against them and have direct policy influence. Yet we seem most hesitant about experiments, prediction markets, and the like not when they directly control outcomes, but when they bear on socially charged (but more decision-remote) facts like innate gender differences, myths of national pride, and so on.
A possible explanation of these attitudes is that pure value signals are too easy to fake costlessly, so we use factual claims to communicate group membership; e.g., your worth as a feminist ally is judged by your claims about innate differences, not by your professed value of gender equality.
When the estimation mechanism is a group of people, we can count on them to understand these factual signals and to refrain from forcing us into a consensus that signals animosity toward some group we value. If controversial data comes up, expert economists can be counted on to convey it relatively diplomatically, but the results of experiments or prediction markets can force us into positions where the clear fact we are bound to accept is of a form that would signal disloyalty to an allied group.
On reflection, I think there might be two different phenomena at play.
There is definitely the “I want to talk about how evil Republicans/Democrats are, so let me toss up some facts that make them look bad and me good” move, e.g., “did you know about the emoluments clause?” I didn’t realize the first time I read your post that you were talking about this kind of situation, because I don’t think this use of facts has much impact on the actual choices people make (except perhaps occasionally backing them into rhetorical corners). When the tables are turned and the same fact pattern fits their guy, people just come up with a reason it’s not comparable.
However, uncontrollable estimation mechanisms really don’t create any problems for fact-soldiers like this. Indeed, since we quote facts in this way largely by selecting a few from a large repository, the *more* prediction mechanisms with a whiff of legitimacy, the better, as I can just cite the one I want. If your theory were correct, it should equally well predict that people would oppose things like organized surveys of economists’ views on school vouchers, the minimum wage, etc., but everyone seems fine with this idea even without specific questions in mind.