Consider a group of experts who are certain that fact X is true. This fact could be, for example, that global warming is occurring and is caused by human activity. They sign a statement saying X is true. If the experts turn out to be wrong, however, they will probably pay at most a small reputational cost. This low error cost reduces the authority value of the experts’ statement. I propose a reputation commitment mechanism that would increase the personal cost to the experts of being wrong and so increase the authority value of their statement.
The experts could sign a document which reads: “We are extremely certain that X is true. In fact, we are so certain that X is true that if it turns out that X is not true it must mean that we who have signed this document have defective reasoning abilities and so are unfit to publicly comment on any issue of importance. If X turns out to be false we pledge not to claim that we were wrong because we were deceived, misled, or unlucky but rather we will admit that we were wrong because we lacked the intelligence to understand the issue.”
This reputation commitment mechanism would only work when experts were near 100% certain about something. It couldn’t be used by experts making probabilistic predictions.
"It couldn’t be used by experts making probabilistic predictions."
Well, that rules out almost all useful statements by experts.
Experts are willing to consider that they might be wrong; they know that drawing conclusions from the best evidence available doesn't mean there isn't better evidence waiting to be found.
In short, looking for people who are willing to say "I'm 100% certain this is true" about any non-trivial issue is a good way to track down the amateurs. Experts will hedge their bets.
How many people, experts even, would have said confidently, before the invention of flight, that man would never make it to the Moon on a rocket? Or, before the Michelson-Morley experiment, that we would never discover the atomic bomb? And yet there must have been physicists (like Margrete Heiberg Bose) who lived from the original Michelson-Morley experiment all the way through to nuclear weapons, and others (like Erwin Richard Fues) from the original Wright Flyer through to the 1969 Moon landing. Revolutions happen.
There may be some middle-ground ways around this in which very confident claims can still be expressed: a prediction market (as mentioned below), staking out ahead of time the kinds of things that would have to be true in order for one to be wrong (kind of suggested below), or perhaps a resolution mechanism along the lines of asking "are you still sure?" every X years. But it seems like each of these can be gamed. On the latter suggestion, the interval might be field- and fact-dependent, or completely defined by the expert; in general, these kinds of 'certain' facts need a time bound, as in "X is true, and I'm certain that nothing in the next 10 years will change my belief in X's truth value."
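The time-bounded commitment with a periodic "are you still sure?" check could be sketched roughly as follows. This is only an illustration of the idea: the class and method names (Commitment, due_for_reaffirmation, and so on) are hypothetical, and the five-year default interval is an arbitrary assumption.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Commitment:
    """A time-bounded certainty pledge: 'X is true, and nothing in the
    next horizon_years will change my belief in X's truth value.'"""
    claim: str
    signed: date
    horizon_years: int
    reaffirmations: list = field(default_factory=list)

    def due_for_reaffirmation(self, today: date, every_years: int = 5) -> bool:
        """Is it time to ask the expert 'are you still sure?'
        (Approximates a year as 365 days for simplicity.)"""
        last = self.reaffirmations[-1] if self.reaffirmations else self.signed
        return today >= last + timedelta(days=365 * every_years)

    def reaffirm(self, today: date) -> None:
        """Record that the expert, when asked, stood by the claim."""
        self.reaffirmations.append(today)

    def expired(self, today: date) -> bool:
        """Has the stated certainty horizon passed without retraction?"""
        return today >= self.signed + timedelta(days=365 * self.horizon_years)

c = Commitment(claim="X is true", signed=date(2020, 1, 1), horizon_years=10)
if c.due_for_reaffirmation(date(2025, 1, 1)):
    c.reaffirm(date(2025, 1, 1))
```

Even in this toy form the gaming problem is visible: nothing stops an expert from cheaply reaffirming every cycle, which is why some external stake (as in a prediction market) seems necessary.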
One thing we gain by this is a measurable quantity of how much of an expert one is: the breadth of one's predictions, the length of time one remains confident in them, and the rate of change of these two quantities relative to other so-called experts.
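One crude way such a measure could be computed is sketched below. The scoring formula (breadth of topics times total years of still-standing confidence) is an assumption made for the sake of example, as are the sample track records; the original suggestion leaves the exact measure open.

```python
def expertise_score(predictions):
    """predictions: list of (topic, years_confident, still_standing) tuples.

    Hypothetical score: breadth of distinct topics times the total years
    of confidence in predictions the expert has not had to retract.
    """
    breadth = len({topic for topic, _, _ in predictions})
    # Only predictions that are still standing count toward duration.
    duration = sum(years for _, years, standing in predictions if standing)
    return breadth * duration

# Invented track records for two self-described experts.
experts = {
    "A": [("climate", 10, True), ("economics", 5, True)],
    "B": [("climate", 20, False)],  # retracted, so it counts for nothing
}
ranking = sorted(experts, key=lambda name: expertise_score(experts[name]),
                 reverse=True)
# A's two standing predictions outrank B's single retracted one.
```

Comparing the rate of change of these scores over time, rather than the scores themselves, would be the relative measure suggested above.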