Academia functions to (A) create and confer prestige on associated researchers, students, firms, cities, and nations, (B) preserve and teach what we know on many general abstract topics, and (C) add to what we know over the long run. (Here "know" includes topics where we are uncertain, and practices we can't express declaratively.)
More Academic Prestige Futures
Given the evidence that prediction markets are 70-75% accurate at predicting scientific replication in the fields examined, I wonder whether, in the short term, this would incentivize scientists to engage in specific kinds of questionable research practices to make their studies resemble other highly-priced studies. Given how common QRPs are (there are many surveys on this; a good table summarizing them appears in the supplement of this article), I'm sure scientists would quickly identify which factors lead to higher prices and p-hack, HARK, drop data, or use other tactics to raise the price of their work, at least in the short term.

And given how rarely studies are replicated, there's no way to be confident about the distribution of longevity for false-positive findings. False-positive research programs could stay highly priced for long periods, complete with subsequent evolutions and offshoots (see the SSC post about a serotonin transporter mutation with 1,400+ studies claiming it's related to depression and other neuropsychiatric conditions, only for it to prove useless in a high-powered sample). And since breakthroughs enable one another, any prediction about scientists or research programs that reaches past the next scientific breakthrough will suffer in proportion to how much that breakthrough changes how science is conducted and which research programs follow it.

More optimistically, this market might incentivize the adoption of registered reports, open data and code, and other practices that are well understood to boost Robin's option (C), which would be a very good thing.
This problem already exists now. So as long as my proposed changes don't make it worse, they might still improve on the status quo.
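The longevity worry above is easy to make concrete with a toy Monte Carlo. None of the numbers below come from the post or the comment; the replication rate, detection power, and horizon are all hypothetical, chosen only to show that when replication attempts are rare, false positives can persist for a very long time:

```python
import random

# Toy model of how long a false-positive finding survives.
# All parameters are hypothetical, not estimates from the literature.
REPLICATION_RATE = 0.02  # per-year chance anyone attempts a replication
DETECTION_POWER = 0.9    # chance an attempt exposes the false positive
N_FINDINGS = 100_000
MAX_YEARS = 200

def years_until_corrected(rng: random.Random) -> int:
    """Sample how many years a false positive stays unchallenged."""
    for year in range(1, MAX_YEARS + 1):
        if rng.random() < REPLICATION_RATE and rng.random() < DETECTION_POWER:
            return year
    return MAX_YEARS  # never corrected within the horizon

rng = random.Random(0)
lifetimes = sorted(years_until_corrected(rng) for _ in range(N_FINDINGS))
print("median lifetime:", lifetimes[N_FINDINGS // 2], "years")
print("90th percentile:", lifetimes[int(N_FINDINGS * 0.9)], "years")
```

With these made-up rates the median false positive survives roughly four decades, which is plenty of time for the offshoot literatures described above to accumulate, and for any market pricing them to stay confidently wrong.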
A problem that would need to be solved is the ambiguity of "contribution." What does it actually mean? Suppose important, useful result A depends on prior results B and C, and could not have been done without both B and C. Further suppose that result B was very difficult to show, taking a lot of time, people, and money, and result C was relatively easy to show. Can we say anything about the contribution of B and C to A? Were they equally important? Was B more important? By how much? What standard could we apply?
What is lacking is an ideal, mathematical model of contribution assignment.
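For what it's worth, cooperative game theory does offer one candidate standard: the Shapley value, which divides the value of A between B and C according to their average marginal contributions across all orders in which they could have arrived. A minimal sketch, with made-up coalition values and the (strong) assumption that results can be treated as players:

```python
from itertools import permutations

# Hypothetical worth of each set of prior results. A is worth 1.0 and
# needs both B and C, so any coalition missing either one is worth 0.
value = {
    frozenset(): 0.0,
    frozenset({"B"}): 0.0,
    frozenset({"C"}): 0.0,
    frozenset({"B", "C"}): 1.0,
}

def shapley(players):
    """Average each player's marginal contribution over all join orders."""
    shares = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            shares[p] += value[coalition | {p}] - value[coalition]
            coalition = coalition | {p}
    return {p: s / len(orders) for p, s in shares.items()}

print(shapley(["B", "C"]))  # {'B': 0.5, 'C': 0.5}
```

Notice the verdict: because A strictly required both, the Shapley value splits the credit equally and is blind to the fact that B took far more time, people, and money. Any standard that wants to reward difficulty would have to bring costs in as a separate ingredient, which suggests the ambiguity here is real and not just a bookkeeping problem.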
What makes you think academia wants to hand out jobs meritocratically, using cold objective market forces?
https://www.nber.org/papers...
Currently, women are 3-15 times more likely to be selected as members of the AAAS and NAS than men with similar publication and citation records.