4 Comments

It sounds like taking a weighted average of the log-odds of your participants' estimates is, in essence, trying to find "the estimate an average unbiased expert would give" by cancelling out the noise and boosting the expert judgments. This tracks with your "we need good researchers" results, and also hints that there may be something better to do: identify each participant's unique evidence and add it back in, rather than just aggregating the common evidence.
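A minimal sketch of that aggregation rule, assuming the weights are externally supplied expertise scores (the function name and example weights here are illustrative, not from the post):

```python
import math

def aggregate_log_odds(probs, weights):
    """Combine probability estimates via a weighted average of log-odds.

    probs   -- each forecaster's probability for the event (0 < p < 1)
    weights -- nonnegative weights, e.g. hypothetical expertise scores
    """
    log_odds = [math.log(p / (1 - p)) for p in probs]
    mean = sum(w * lo for w, lo in zip(weights, log_odds)) / sum(weights)
    return 1 / (1 + math.exp(-mean))  # map the averaged log-odds back to a probability

# Three forecasters say 60%, 70%, 90%; the third is weighted as more expert.
print(aggregate_log_odds([0.6, 0.7, 0.9], [1, 1, 3]))  # ~0.83
```

Note that averaging in log-odds space pulls the result toward confident forecasters more than a plain average of probabilities would.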


That was from an unpublished paper by the Good Judgment team, presented by Lyle Ungar. I don't have the details.


> ACE has also learned how to make people better at estimating, both by training them in basic probability theory

What sort of training? (This is highly relevant to the Center for Applied Rationality as it designs its training units.)


An idea: we could make a web application for finding smart crowds. It could work like a prediction market, but instead of money people would earn "smart points", and the people with the highest number of "smart points" would form the "smart crowd".
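One way the point awards could work is a Brier-style proper scoring rule, which rewards honest probability reports; this sketch, including the function name and the 0-100 point scale, is invented for illustration:

```python
def smart_points(prob, outcome):
    """Award points with a Brier-style proper scoring rule.

    prob    -- probability the user assigned to the event
    outcome -- 1 if the event happened, 0 otherwise
    Returns points in [0, 100]; reporting your true probability
    maximizes expected points, so there is no incentive to misreport.
    """
    brier = (prob - outcome) ** 2  # 0 is a perfect forecast, 1 is the worst
    return 100 * (1 - brier)

print(smart_points(0.8, 1))  # confident and right: 96.0 points
print(smart_points(0.8, 0))  # confident and wrong: 36.0 points
```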

Without money involved, it would need special rules to prevent people from creating sockpuppet accounts, entering deliberately bad predictions, and then beating them with their official account. For example, people would earn fewer points for winning against weaker players, as in the sketch below.
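A minimal sketch of one such rule, assuming an Elo-style expected-score discount (the ratings and k-factor here are illustrative):

```python
def points_for_win(winner_rating, loser_rating, k=32):
    """Elo-style award: points gained shrink as the opponent's rating falls,
    so farming fresh sockpuppet accounts yields almost nothing."""
    expected_win = 1 / (1 + 10 ** ((loser_rating - winner_rating) / 400))
    return k * (1 - expected_win)

print(points_for_win(1500, 1500))  # evenly matched: 16.0 points
print(points_for_win(1500, 800))   # beating a low-rated sockpuppet: ~0.6 points
```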

With good marketing, this web application could become popular and many people would join. It could be connected to Facebook (it could show you the smartest people among your friends), etc. If questions were entered only by the organizers, they could make money by letting companies ask their own questions. Some of that money could then be distributed to the people who answered correctly, which would make the application more popular, and so on.
