Reputation Commitment Mechanism

Consider a group of experts who are certain that fact X is true.  This fact could be something such as that global warming is occurring and is caused by human activity.  They sign a statement saying X is true.  If the experts turn out to be wrong, however, they will probably pay at most only a small reputational cost.  This low error cost reduces the authority value of the experts’ statement.  I propose a reputation commitment mechanism that would increase the personal cost to the experts of being wrong and so increase the authority value of their statement.

 

The experts could sign a document which reads: “We are extremely certain that X is true.  In fact, we are so certain that X is true that if it turns out that X is not true it must mean that we who have signed this document have defective reasoning abilities and so are unfit to publicly comment on any issue of importance.  If X turns out to be false we pledge not to claim that we were wrong because we were deceived, misled, or unlucky but rather we will admit that we were wrong because we lacked the intelligence to understand the issue.”

This reputation commitment mechanism would only work when experts were near 100% certain about something.  It couldn’t be used by experts making probabilistic predictions.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Well, there’s a plus side and a minus side to this policy.

    On one side, it would quickly ruin a lot of famous professionals who are quite competent in their chosen fields but who have no concept of the difficulty in calibrating extreme probabilities.

    On the minus side, it would lead to public decisionmaking being dominated by overconfident fools, because they are the only ones who would sign such a document for presentation to Congress. This would be significantly different from the present situation, in ways that are bound to occur to me eventually.

  • http://openandwilling.blogspot.com DavidD

    This is not new. Ancient astrologers and other soothsayers faced being thrown off a cliff if they were wrong.

    When it comes to the chance that global warming is mostly due to human activity, some experts are on record as estimating that at 90%. I’m not sure anyone who says the chance is negligible deserves to be called an expert, but maybe you want to hold them to their opinion, too. Now if you hold a gun to their head about that, how long are you going to hold it? What criteria are you going to use to say that they are right or wrong? If such criteria won’t be reached in the lifetime of the experts, it doesn’t make any difference. If you shoot them prematurely or have no intention of shooting anyone, what have you gained?

    If I were a benevolent dictator I’m sure I’d do 100 other things to regulate opinions, such as restricting free speech, before I put more pressure on experts to be right. Too much pressure turns people into yes-men. I’m not sure there’s an issue where the range of opinion among experts speaking within their field of expertise is so broad that I feel a need to fix the current system of scientific expertise. It’s so much non-expert opinion that is frustrating. How many of them can we throw off a cliff?

  • Dagon

    This is effectively a binary prediction market, with the only bet being “all my reputation”. I can’t see any advantage over monetary wagers.

    This doesn’t address the bigger problem with such markets, which is that “experts” simply don’t make statements that can be resolved as fact with no wiggle room. The average temperature in 2050 is measurable, but the causality isn’t, and more importantly, the effects of proposed action aren’t.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    They could post a monetary bond that they would forfeit if they were wrong. Forfeited bonds could go into a fund that helps publicize the claims of people who post such bonds. This might create an incentive to make such claims, and perhaps still avoid anti-gambling laws.

  • Carl Shulman

    A major problem with monetary bets to discipline experts is that the wealth derived from making poor predictions can dwarf the costs of the bet (both financial and reputational) when the time to collect comes around. Consider Paul Ehrlich (http://en.wikipedia.org/wiki/Paul_R._Ehrlich), who made millions and earned numerous accolades for making such fabulous predictions as:

    “the battle to feed all of humanity is over… In the 1970s and 1980s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now.”

    “India couldn’t possibly feed two hundred million more people by 1980,”

    “I have yet to meet anyone familiar with the situation who thinks that India will be self-sufficient in food by 1971.”

    Losing his classic bet with Julian Simon (http://en.wikipedia.org/wiki/Simon-Ehrlich_wager) and being consistently mistaken did very little to reduce his fame or the flow of professional rewards throughout the 1990s.

  • Paul Gowder

    Carl: perhaps lousy predictions have an entertainment value aside from their truth value?

  • Stuart Armstrong

    If we truly want to encourage better experts, we need a feedback mechanism. We should encourage experts to make definite, falsifiable conclusions, but we shouldn’t punish them for doing so when they go wrong; rather, we should get them to integrate their new knowledge into their assessments.

    Maybe a method for punishing those who make the same mistake again, and rewarding those that change and improve? A double or nothing betting system, maybe?

  • Tom Crispin

    Adding monetary bets to politics seems a poor solution compared to real markets. Doom and gloom is an entertainment industry, treat it as such.

  • EthanJ

    Stuart – you can’t pay the hedgehog to become a fox. Like the scorpion and the frog, hedgehogging is in his nature. Better to identify the foxes and fire the hedgehogs.

  • Stanford

    I have an idea for a game. “Detecting Bias at Overcoming Bias”. Is there a bias when people think they are insusceptible(?) to bias? I predict the same amount of bias as before, only different biases.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Stanford, would you say the same thing about avoiding cruelty, that if we try to stop being cruel in one context another cruelty will slip out, so we might as well be just as cruel as we feel like?

  • Stanford

    No I would not compare cruelty to bias. We are not programmed for cruelty. And I didn’t mean to imply anything about the site. I visit several times a day. It’s just an observation, really…

  • http://rafefurst.wordpress.com/ Rafe Furst

    LongBets.org is pretty close to this desired reputation commitment mechanism. My problem is that it’s limited in who will use it. More interesting to me would be a system where we can rate someone’s reputation regardless of whether they opt in to the game or not. Better still would be one where they don’t even have to know they are being rated.

  • https://launchpad.net/~themusicgod1 themusicgod1

    How many people, experts even, would have said confidently, before the invention of flight, that man would never make it to the moon on a rocket?  Or, before the Michelson-Morley experiment, that the atomic bomb would never be discovered?  And yet there must have been physicists (like Margrete Heiberg Bose) who lived through both the original Michelson-Morley experiment all the way through to nuclear weapons, and others (like Erwin Richard Fues) through the original Wright Flyer and the 1969 moon landing.  Revolutions happen.

    There may be some middle-ground ways around this wherein very confident claims can be expressed, such as a prediction market (as mentioned below), or staking out ahead of time the kinds of things which would have to be true in order for one to be wrong (as kind of suggested below), or perhaps a resolution mechanism along the lines of being asked ‘are you still sure?’ every X years… but it seems like each of these things can be gamed.  On the latter suggestion, the time may be field- and fact-dependent (or, in general, these kinds of ‘certain’ facts need a time bound: “X is true, and I’m certain that nothing in the next 10 years will change my belief in X’s truth value”), or completely defined by the expert.

    One thing we gain by this is a measurable quantity of how much of an expert one is — by the broadness of predictions and the length one is confident of them, and the rate of change of these two quantities relative to other so-called experts.

  • Wade Lahoda

    “It couldn’t be used by experts making probabilistic predictions.”

    Well, that rules out almost all useful statements by experts.

    Experts are willing to consider they might be wrong and know that just because they’re drawing conclusions on the best evidence available doesn’t mean there isn’t better evidence waiting to be found.

    In short, looking for people who are willing to say “I’m 100% sure this is true” about any non-trivial issue is a good way to track down the amateurs. Experts will hedge their bets.