What Do I Want To Know?

Reading the novel Lolita while listening to Winston’s Summer, thinking of a fond friend’s companionship, and sitting next to my son, all on a plane traveling home, I realized how vulnerable I am to needing such things. I’d like to think that while I enjoy such things, I could take them or leave them. But that’s probably not true. I like to think I’d give them all up if I needed to face and speak important truths, but well, that seems unlikely too. If some opinion of mine seriously threatened to deprive me of key things, my subconscious would probably find a way to see the reasonableness of the other side.

So if my interests became strongly at stake, and those interests deviated from honesty, I’d likely not be reliable in estimating truth. Yet as my interests fade to zero, I suspect my opinions would be dominated by random weak influences, such as signaling pressures, that also have little to do with truth. My reliability seems contingent on my having atypically good incentives to get it right.

So on what topics do I have good incentives? Of course this is also a subject on which I may have poor incentives for accuracy. If things precious to me depended on my believing I had good incentives, well then I’d believe that, even if untrue. What to do?

It seems my safest place to stand for drawing inferences is on my most robust beliefs about good incentives. And for me, that place is prediction markets. Since prediction markets seem to give robustly good incentives on a rather wide range of topics, I should believe what they say, and think I’d have more reliable beliefs if we had more such markets. I might think we don’t need them much on certain safe topics, because we already have good reliable other ways to estimate such topics. But I just can’t trust such judgements that much – they might also be biased.
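To make the incentive claim concrete, here is a minimal sketch of the logarithmic market scoring rule (LMSR), the market-maker mechanism commonly used to run prediction markets. The liquidity parameter `b` and the trade sizes below are illustrative choices, not values from the post; the point is only that prices move with trades and can be read as probability estimates, so traders profit exactly when they move prices toward the truth.

```python
import math

def lmsr_cost(shares, b=100.0):
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in shares))

def lmsr_price(shares, i, b=100.0):
    """Instantaneous price of outcome i: exp(q_i/b) / sum_j exp(q_j/b)."""
    total = sum(math.exp(q / b) for q in shares)
    return math.exp(shares[i] / b) / total

def buy(shares, i, amount, b=100.0):
    """Return (cost to buy `amount` shares of outcome i, new share state)."""
    after = list(shares)
    after[i] += amount
    return lmsr_cost(after, b) - lmsr_cost(shares, b), after

# Two-outcome market: prices start at 50/50, and a trade backing
# outcome 0 pushes its price (the market's probability estimate) up.
state = [0.0, 0.0]
cost, state = buy(state, 0, 50.0)
print(round(lmsr_price(state, 0), 3))  # price rises above 0.5
```

A trader who believes the true probability exceeds the current price expects to profit by buying, which is the sense in which such markets reward accuracy rather than affiliation or signaling.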

Of course I can’t know that I or we will be better off by having more truthful estimates on any particular topic. I might think that on certain topics we’d be better off not knowing. But I can’t trust that judgement greatly – it would be better to rely on prediction markets on this meta question, of what we’d be better off not to know.

Someday hopefully we’ll have many prediction markets, and maybe even futarchies, to guide humanity through the many shoals ahead, including on what we’d do better not to know. Of course we might be mistaken about what we value, and so ask futarchies about the wrong consequences, thus inducing mistakes about what we’d rather not know. So it is especially important to consider the values in which we have the most confidence.

You might argue that your best estimate is that we are in fact seriously mistaken on what we value, so mistaken that we would ask futarchies the wrong questions, and then such markets would mislead us on what we’d be better off not to know. You might instead recommend that we follow your suggestions about what we should know, and what to believe in the absence of the prediction markets you advise against. And well, you might be right. But really, what grounds do you have for confidence in that set of judgements? Why should we trust your judgement on the good quality of the incentives for your intuitions?

  • Hedonic Treader

    Robin, I was wondering how you reconcile your hope for the power of prediction markets with the fact that in actual markets that make predictions about the future values of houses, stocks, bonds etc., you have irrational patterns of bubbles building up and collapsing.

    I grew up with the naive illusion that such markets would self-correct in time because people would start betting against the irrational mainstream, but in the last couple of years we saw that taxpayers were forced to intervene with their own wealth in these markets just in order to save the system itself from the mess resulting from the market failures. Doesn’t this let us conclude that, in fact, markets don’t actually produce “swarm rationality” but “swarm irrationality”?

    • roystgnr

      When the market correctly predicts that a risky investment will be profitable, it’s hard to call that a market failure. If the profit mechanism begins with “taxpayers were forced to intervene with their own wealth”, however, there’s definitely something outside the market that needs to be fixed.

      You’re more on target with the discussion of bubbles in general. When 51% of the population is acting irrationally, the benefit of markets over social democracy isn’t that you can trust the results, it’s that you can opt out of and even bet against the results.

      Ironically, one of the mechanisms for market bubbles is excessive faith in market results. If everybody trusts that the market isn’t overvaluing something, they ignore the risks in buying it, and then the market will overvalue it.

    • Dustin

      Robin doesn’t say prediction markets are perfect rational decision makers.

      He says they are better than what we normally use.

    • I grew up with the naive illusion that such markets would self-correct in time because people would start betting against the irrational mainstream, but in the last couple of years we saw that taxpayers were forced to intervene with their own wealth in these markets just in order to save the system itself from the mess resulting from the market failures. Doesn’t this let us conclude that, in fact, markets don’t actually produce “swarm rationality” but “swarm irrationality”?

      The important question isn’t whether or not markets are rational or irrational, but whether or not markets are more rational than the next best alternative. Showing that markets have a certain kind of bias doesn’t tell us much about which institutions we should use for making decisions, unless it is also shown that a non-market institution lacks that particular bias (and is less biased on net).

  • Russ Anderson

    @Hedonic Treader: I’ve been thinking the same thing.

    @Robin, how can you continue to have such confidence in the ability of markets as forecasting tools, when the credit crisis suggests that they did not accurately identify the biggest financial risk to our economy in the past 60 years?

    Doesn’t the failure of markets to accurately price in these risks cause you to question your fundamental premise on the validity of markets as prediction mechanisms?

    • But the markets did correctly predict the outcome. The market predicted the market makers would be bailed out, and that is what happened. The free market extends even to government intervention and government bail outs.

  • Constant

    the mess resulting from the market failures.


  • Michael Vassar

    I think that the way to build incentives that favor truth is to a) feel entitled to lie, since people feel much more entitled to commit thought-crimes if they don’t feel that they have to admit them, and b) seek actionable models and act on them. Related to this is seeking a much better life, not just slightly better.

  • Pingback: Incentives are incompatible with the Truth – as is the lack of incentives « Nation of Beancounters

  • Michael Vassar: great suggestions, but related how? You probably mean that if you want a much better life, you should do both a) and b), right? Or that if you want to win as much as possible, you’ll search for and employ ruthlessly whatever works?

    • Michael Vassar

      I mean that a normal distribution of outcomes is granted to society in response to a normal distribution of luck and various types of obedience, but outlier results in life have to be built, not passively received. If you aim to produce outlier results, you will have to acquire true beliefs. False ones will keep causing ‘unlucky’ failures.

  • Michael Vassar: I’m interested in the defense of the much vs. slightly distinction. Intuitively, slightly + slightly + slightly = much. Much of human progress comes from slight improvement, though much also comes from massive improvements.

    • Michael Vassar

      Intuitively, to me, many incremental improvements to bows led to an increase in effectiveness, relative to early bows, much greater than the increase from late bows to early firearms. But within a paradigm, slight improvements consume low-hanging fruit. No amount of such incremental advance would have created a bow as effective as a Kalashnikov.

  • Matt Knowles

    Can Prediction Markets work with a limited bettor base? I ask because it occurs to me that there might be questions the US Govt wants to pose, the answers to which it doesn’t want to be generally known. For example, might they use a Prediction Market where they allow only cleared analysts to place bets? And would those Markets still have value, even though they are so limited?

  • You had better have an anonymous/pseudonymous blog where you can say what you want without fear of losing those things you cherish. I think I say less controversial things, and I still wouldn’t be comfortable putting my real name (not that anybody would recognize it) to them.

    I noted where my writings may be less reliable here.

  • georgi

    You can’t decide this question without deciding which **values** matter, because the skew of prediction markets and the way they influence the world will change which values win out. And that is an old question. There is no way to convince a skeptic that a different set of values than his would be correct. This is why at some point we have to rely on some sort of tribal notion of what is “good” or “correct” behavior. If Robin thinks the values decision can be outsourced to some prediction market or to any neutral mechanical system, then he’s back to the flaws of the ultra-rationalists. Given that choice, I’d rather rely on my intuitions and my flawed tribal loyalties than “truthful” decisions that might skew towards values I would consider alien and unacceptable.

    P.S. Bear in mind that Robin can’t even establish that locally improving “truth” on certain narrow margins improves human welfare.