Suspecting Truth-Hiders

Tyler against bets:

On my side of the debate I claim a long history of successful science, corporate innovation, journalism, and also commentary of many kinds, mostly not based on personal small bets, sometimes banning them, and relying on various other forms of personal stakes in ideas, and passing various market tests repeatedly. I don’t see comparable evidence on the other side of this debate, which I interpret as a preference for witnessing comeuppance for its own sake (read Robin’s framing or Alex’s repeated use of the mood-affiliated word “bullshit” to describe both scientific communication and reporting). The quest for comeuppance is a misallocation of personal resources. (more)

My translation:

Most existing social institutions tolerate lots of hypocrisy, and often don’t try to expose people who say things they don’t believe. When competing with alternatives, the disadvantages such institutions suffer from letting people believe more falsehoods are likely outweighed by other advantages. People who feel glee from seeing the comeuppance of bullshitting hypocrites don’t appreciate the advantages of hypocrisy.

Yes existing institutions deserve some deference, but surely we don’t believe our institutions are the best of all possible worlds. And surely one of the most suspicious signs that an existing institution isn’t the best possible is when it seems to discourage truth-telling, especially about itself. Yes it is possible that such squelching is all for the best, but isn’t it just as likely that some folks are trying to hide things for private, not social, gains? Isn’t this a major reason we often rightly mood-affiliate with those who gleefully expose bullshit?

For example, if you were inspecting a restaurant and they seemed to be trying to hide some things from your view, wouldn’t you suspect they were doing that for private gain, not to make the world a better place? If you were put in charge of a new organization and subordinates seemed to be trying to hide some budgets and activities from your view, wouldn’t you suspect that was also for private gain instead of to make your organization better? Same for if you were trying to rate the effectiveness of a charity or government agency, or evaluate a paper for a journal. The more that people and habits seemed to be trying to hide something and evade incentives for accuracy, the more suspicious you would rightly be that something inefficient was going on.

Now I agree that people do often avoid speaking uncomfortable truths, and coordinate to punish those who violate norms against such speaking. But we usually do this when we have a decent guess of what the truth we don't want to hear actually is.

If it were just bad in general to encourage more accurate expressions of belief, then it seems pretty dangerous to let academics and bloggers collect status by speculating about the truth of various important things. If that is a good idea, why are more bets a bad idea? And in general, how can we judge well when to encourage accuracy and when to let the truth be hidden, from the middle of a conversation where we know lots of accuracy has been sacrificed for unknown reasons?

  • IMASBA

    “For example, if you were inspecting a restaurant and they seemed to be trying to hide some things from your view, wouldn’t you suspect they were doing that for private gain, not to make the world a better place? If you were put in charge of a new organization and subordinates seemed to be trying to hide some budgets and activities from your view, wouldn’t you suspect that was also for private gain instead of to make your organization better?”

    Robin would feel right at home at the NSA… What I'm trying to say is that this sounds like framing the issue as: "they don't want to participate in prediction markets, ergo they're uncomfortable with the truth and are hiding it". It is perfectly possible (and probable) that they actually believe the mechanisms of prediction markets will harm truthfinding in the long run.

    • oldoddjobs

      “harm truthfinding”, wow.

  • http://juridicalcoherence.blogspot.com/ Stephen Diamond

    Expressing one’s degree of confidence in a belief does not constitute a “more accurate expression of a belief,” since it has nothing to do with describing the belief’s referent. In the usual case, there is no value in communicating your subjective degree of confidence in your beliefs. That’s why, after all, degree of confidence is almost never expressed in classic prose. It’s a snare and a folly committed by bad bloggers and Bayesian formalists. I’m not for banning it; “flaming” it, maybe.

    Consider that in a court of law, an attorney is prohibited from expressing his personal beliefs about the case. Now wouldn’t that be nice information for the jury to have?

    • Ari

      There’s no information in expressing your confidence? Why not?

      I’ve witnessed people not expressing their actual confidence, including myself, and leaving that information out of an argument “for the sake of good”. It is not the best strategy if everyone else is also playing overconfident. But that doesn’t mean it is not information.

      I think most people are completely unaware of their actual confidence in their beliefs unless somehow made responsible. I’ve become more aware of mine over time, and that doesn’t mean I always say it, but at least I admit it would have value.

      edit: Removed irrelevant point.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        First, I'm really more concerned with arguers (not) expressing their confidence rather than admitting their lack of confidence. If you have definite doubts, you should say so. That's just honesty. If you're unsure of a factual matter, you shouldn't pretend to confidence, not if you're interested in truth. But being confident of an intellectual position tells us much more about the person than about his belief. When the discussion is about _opinion_, appeals to authority are improper, and that's not because they're "uninformative." "Information" can serve as diversion rather than elucidation in an ongoing discussion.

        Second, I didn’t say in the comment you’re responding to that there’s no information in expressing confidence. I did say there’s no “value” in doing so. But that’s highly relative to the kind of discussion. In discussions of controversial abstract matters, it’s a distraction.

        Do you think scientific reports should include information about the researcher’s [personal, not statistical] degree of confidence in his conclusions? Even if we had the technology to discover it objectively using brain scans or something, would that information be helpful? It’s simply confusion to say, as Robin seems to, that such information helps make the belief itself clearer, which was my main point in the above comment. I would be frankly appalled to see such information in a research report. Why? Because rather than being some Bayesian advance on weighing evidence, it would signal a reversion to unsophisticated primal attitudes where intellectual controversy is really about a personal conflict involving the proponent. It creates a status issue where we should strive to eliminate status as a consideration.

    • Enrique

      "Consider that in a court of law, an attorney is prohibited from expressing his personal beliefs about the case." True, so why apply the Turing Test to law? See http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1978017

  • Lord

    Are you so sure he just doesn't believe in efficient markets and sees nothing to add to them? There's no point in betting if you think the odds are balanced.

  • Siddharth

    Wow. There should be a Robin Hanson translation of everything.

    • Ari

      Haha, yes indeed. When A.I. gets there, maybe we can sell that as a product.

  • Pingback: Assorted links

  • free_agent

    Tyler writes, “The quest for comeuppance is a misallocation of personal resources.”

    That’s certainly not true when the contest is for status within the chattering class!

    • http://juridicalcoherence.blogspot.com/ Stephen Diamond

      Contests for status are a misallocation of resources. This is really the main point.

      • free_agent

        Contests for status are brutally important for those who are engaging in them. Some participants will be culled in each generation, and it’s crucial not to be one of them.

        “Consider the matter of status competition. Mr. Roberts, like so many before him, argues that conspicuous consumption is an unhappy zero-sum game. But this is of course true of most forms of competition: Most academics I know can rank-order everyone in the room at a professional conference with the speed and precision of a courtier at Versailles. Any competition, from looks to money to academic credentialing, both consumes a lot of resources and makes many of the participants feel bad about themselves. Why, then, does the literature on status competition always tell us that we should redistribute capital gains or inheritances and never tell us that we should redistribute academic chairs or book contracts?”
        – Megan McArdle reviews “Shiny Objects” by James A. Roberts

  • GeorgeNYC

    “And surely one of the most suspicious signs that an existing institution isn’t the best possible is when it seems to discourage truth-telling, especially about itself.”

    Is this really true? I know I am paraphrasing here, but I recall a discussion of the Bush administration in which they defended some of their inaccuracies because they believed they were "creating" the reality on the ground. Is "truth" really a positive good? For some people (religious fanatics, for example) I would think the preference is to actually deny the physical "truth" around them. One could say the same about the use of drugs (even just alcohol).

    I am sure I am straying far off topic. I appreciate the high quality of discussion on this site. However, I felt compelled to write because your post really got me thinking about this.

  • Pingback: On Bets | This is Ashok.

  • Firionel

    Surely there must be a distinction between “someone in control is hiding something” and “by and large the members of an institution conspire not to tell each other the truth”.

    This is why the restaurant example appears misleading to me: there is a clearly identifiable actor, in possession of a (knowably accurate) piece of information, actively suppressing it (keeping others from obtaining it). But most scenarios people have in mind when discussing prediction markets are very different from that situation: while there is a whole lot of information, it is distributed among many actors and carries a certain degree of uncertainty. It is not even necessarily the case that anybody is holding information back consciously.

    In particular, the moral implications are then quite different: while in the restaurant case there is something akin to lying, the more complex scenario is marked by something closer to (possibly feigned) indifference.

  • F.E. Guerra-Pujol (Enrique)

    Follow up post: I want to defend both Tyler and Robin by explaining why each one might be right, depending on whether we look at bets at a “micro” or individual level or a “macro” or aggregate level. In summary, if we look at a large number of bets on a given topic or question at a macro level, then Robin is right: all these bets in the aggregate tell us something, and this is why prediction markets are so powerful and should be legalized. But at the same time, each individual bet on a micro scale may not necessarily reveal all that much information, for the reasons Noah and others have given (and, I would add, because individuals might make inconsistent bets over time), so Tyler is right when we look at bets on a micro or individual scale …


    • http://overcomingbias.com RobinHanson

      An aggregation of bets could not give us info if each one did not give us info on average as well.

      • F.E. Guerra-Pujol (Enrique)

        But doesn’t all the noise at the micro level cancel out?

      • http://overcomingbias.com RobinHanson

        Some does, not all.
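
A minimal simulation (in Python, with made-up numbers) of the point in the exchange above: averaging many bets cancels the independent noise in each one, but only because each bet carries some signal on average. Averaging pure noise recovers nothing, and a bias shared across bettors does not cancel, which is one reading of "some does, not all."

```python
import random

random.seed(0)
truth = 0.7       # hypothetical true probability of some event
n_bettors = 1000

# Each bet = truth + independent noise: individually noisy, informative on average.
informative = [truth + random.gauss(0, 0.2) for _ in range(n_bettors)]

# Pure-noise bets carry no signal about the truth at all.
uninformative = [random.gauss(0.5, 0.2) for _ in range(n_bettors)]

# Bets sharing a common bias: the independent noise cancels, the bias does not.
biased = [truth + 0.1 + random.gauss(0, 0.2) for _ in range(n_bettors)]

print(sum(informative) / n_bettors)    # ~0.70: independent noise largely cancels
print(sum(uninformative) / n_bettors)  # ~0.50: no signal to recover by aggregation
print(sum(biased) / n_bettors)         # ~0.80: the shared bias survives averaging
```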

  • Ashok Rao

    Futarchy is an extremely strong claim, and let’s say someone is (reasonably) skeptical that it will work. He’s also reasonably skeptical that you actually believe this will work.

    How do you reveal your true belief that this will work? It seems like "betting" on it won't work, because your whole theory assumes betting works, and that just begs the question.

  • Michael Strong

    I'm very glad to see Robin persisting in this argument, and flabbergasted that Tyler doesn't see the value of betting as a way to get more accurate information disseminated more quickly.

    There are many topics on which elite academics have been mistaken for long periods of time, whereas those with less elite reputations were significantly more accurate. For instance, economists such as Milton Friedman and Peter Bauer, who emphasized that market-oriented economies would grow more quickly than various statist economies in the 1950s, 60s, 70s, and 80s, were widely regarded as "ideological." In the meantime, economists such as Samuelson and Galbraith predicted convergence of GDP per capita between communist and "capitalist" economies. Had predictions been a relevant factor in academic reputation, the views of Friedman and Bauer would have replaced those of Samuelson and Galbraith far more quickly than they actually did, and hundreds of millions, perhaps billions, of people might have escaped poverty more quickly.

    This is not to imply that Friedman was always right nor that Samuelson was always wrong. Nor is this to claim that there is not a role for intellectual speculation removed from empirical prediction.

    But I do see Robin’s proposals as a significant improvement upon academic publishing alone as the basis for identifying which propositions about reality are more likely to serve as a sound basis for taking action. I would trust prediction markets over the opinions of “reputable scholars” in most of the social sciences most of the time.

    The only reason we grant academia money and status rests on the belief that academia, as an institution, is an efficient mechanism for identifying "truths." In the sciences, this assumption seems largely accurate. Outside the hard sciences, results may vary, to say the least. I see prediction markets and reputational bets as the best strategy for improving the signal-to-noise ratio that currently exists in academic social science.

    If more professors were expected to make reputational bets, we would gradually see which ones were able to make decent judgments about reality and which were not. I suspect we would see little correlation between academic prestige and empirical insight. The ultimate result, sorely needed, would be a lowering of prestige for academics who exhibited little empirical insight and improved prestige for individuals who did – regardless of credentials.

    • IMASBA

      “If more professors were expected to make reputational bets, we would gradually see which ones were able to make decent judgments about reality and which were not.”

      I doubt that; I'd say that, averaged over many prediction markets, few if any people would emerge as prediction champions. Sure, we might see someone get it right three times in a row, and we might be in awe of that, but it really would not be statistically significant. The problem is that a human lifespan may very well be too short to get statistically significant results about a single person's prediction skills, so human nature takes over: we'll champion those who get it right a couple of times in a row, and we'll only find out our mistake when policy based on this wishful championing starts leading to bad things.

      • Michael Strong

        “I doubt that, I’d say that averaged over many prediction markets few, if any, people would emerge as prediction champions.”

        Warren Buffett is a recognized “prediction champion.” I’d say there is non-trivial evidence that Paul Ehrlich is poor at making predictions about natural resource shortages.

        The salient question is, “Would information about the patterns of prediction made by an individual add to our knowledge of that individual’s judgment?” At present, as a society we tend to use academic reputation as a proxy measure for “good judgment,” at least in the relevant academic field. As individuals we might also use our personal evaluations of particular combinations of evidence and reasoning to evaluate the quality of an individual’s judgment.

        Given the limitations of both of those approaches, how can it not improve our understanding of the quality of individual judgment to have records of a particular individual’s predictions and the outcomes of those predictions?

        Moreover, the entire point of a properly structured prediction market is that the "wisdom of crowds" tends to provide us with better information than would result from individual predictions, so that as a society we would be more likely to rely on overall prediction market outcomes than on the judgments of particular individuals. Even if the result was a general decrease in respect for academic expertise, combined with a greater respect for market predictions, that would be a positive outcome.

        Thus we have no need to fear, “policy based on this wishful championing starts leading to bad things.”

        And anytime an intellectual wanted to claim that the market outcomes are consistently worse than his or her own, then we can simply ask the intellectual to prove it – demonstrate that he or she can consistently outperform market-based predictions.

        Given the sustained enthusiasm that intellectuals had for communism over a seventy-year period, despite repeated mass famines and police states, it really would be hard to do worse.

      • IMASBA

        “Warren Buffett is a recognized “prediction champion.” ”

        How do we know that? Because he got rich? To distinguish between a game of chance and a game of skill you need to repeat the game many times. Major investment deals are too rare to repeat enough times during a human lifetime to get statistically significant results (it gets even worse when you realize many advisors are middle-aged at most). Resolutions of prediction markets will be equally rare.

        “I’d say there is non-trivial evidence that Paul Ehrlich is poor at making predictions about natural resource shortages.”

        Keeping a record can point out idiots, but that does not prove it can point out champions. And really, how do we know Ehrlich wasn't right on a host of other subjects? (They're just subjects that weren't so influential in history.)

        "Moreover, the entire point of a properly structured prediction market is that the 'wisdom of crowds' tends to provide us with better information than would result from individual predictions … Thus we have no need to fear, 'policy based on this wishful championing starts leading to bad things.'"

        If we base decisions on the wisdom of the crowd, we expose ourselves to market manipulation aimed at making us lean toward a certain side. If we base decisions on people who beat the market, we might very well be listening to lucky idiots. Maybe we're better off getting used to the inherent fuzziness of the social "sciences" and should stop looking for crystal balls that do not exist, just as we don't take a meteorologist seriously when he says he can predict the weather of August 18, 2050.
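
A back-of-the-envelope sketch (in Python, with purely hypothetical numbers) of the statistical worry running through this sub-thread: when many predictors each make only a handful of calls, short winning streaks appear by luck alone, so a streak is weak evidence of skill.

```python
# Probability that a no-skill predictor (each call a fair coin flip)
# gets k consecutive predictions right:
k = 3
p_streak = 0.5 ** k
print(p_streak)  # 0.125 -- nowhere near statistical significance

# Expected number of such "champions" among 100 no-skill pundits:
n_pundits = 100
print(n_pundits * p_streak)  # 12.5 lucky streaks, with zero real skill
```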

  • Pingback: Overcoming Bias : Why Do Bets Look Bad?