Code isn't really the limiting factor, but the firms that write this software are small w/ few resources, so doing that well hasn't risen to a high priority.
I understand that this is far from the biggest hurdle, but are there good phone apps for quickly creating prediction markets and allowing participants to bet with minimal fuss? There are a lot of people who might be willing to code one.
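For scale, the core market-maker logic such an app would need is tiny. Here's a minimal sketch assuming Hanson's logarithmic market scoring rule (LMSR) as the market maker; the class name and liquidity parameter are illustrative, not any existing app's API:

```python
import math

class LMSRMarket:
    """Toy two-outcome market using the logarithmic market
    scoring rule (LMSR). Outcome 0 = replicates, 1 = fails."""

    def __init__(self, b=100.0):
        self.b = b           # liquidity: higher b = deeper, harder-to-move market
        self.q = [0.0, 0.0]  # net shares sold of each outcome

    def _cost(self, q):
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def price(self, i):
        # Instantaneous price = market-implied probability of outcome i
        total = sum(math.exp(qi / self.b) for qi in self.q)
        return math.exp(self.q[i] / self.b) / total

    def buy(self, i, shares):
        # A trade costs the change in the cost function
        new_q = list(self.q)
        new_q[i] += shares
        paid = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return paid

m = LMSRMarket()
print(m.price(0))    # 0.5 before any trades
print(m.buy(0, 50))  # ~28.1 paid for 50 "replicates" shares
print(m.price(0))    # ~0.62 afterward
```

The hard parts of such an app would be payments, identity, and settlement, not this math.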
How much money do you think there would need to be involved for it to corrupt researchers?
How much money do you think should be involved in such a market to provide reasonable predictions?
There's not remotely a chance of that much money being at stake in these markets.
I am afraid that setting up those markets will corrupt the replications. When there is a lot of money at stake on a certain result failing to replicate, a researcher might sabotage his own experiment so that it does not replicate.
In many fields papers already typically appear as "working papers" before being published in journals.
So then "publication" in a journal is not really "publication" anymore - it's reprinting something that was already public.
Which is fine if you think of journals as selected "Reader's Digests" of the "best" papers in a field.
But perhaps a better reform would be just to have a reputation mechanism (perhaps driven by the prediction market) that votes up the "best" papers.
Without having traditional journals at all.
The submitted papers can be made public for bettors to evaluate.
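As a toy illustration of such a mechanism (the titles and prices below are invented), the "journal" reduces to a sort over market-implied replication probabilities:

```python
# Hypothetical preprints with a current market-implied P(replicates);
# all data here is made up for illustration.
papers = [
    ("ego depletion study",  0.31),
    ("anchoring study",      0.87),
    ("social priming study", 0.12),
]

# The market-driven "best papers" list is just a descending sort.
for title, p in sorted(papers, key=lambda x: x[1], reverse=True):
    print(f"{p:.0%}  {title}")
```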
How are traders in the prediction market supposed to evaluate the reproducibility of papers that haven't yet been published?
It seems there's a chicken-and-egg problem here.
Authors can't publish until the prediction market approves, and the prediction market can't approve something it hasn't seen.
Or am I missing something? Is publication in a "top psychology journal" intended as a sign of prestige/approval/kudos/credit to the authors, rather than primarily as distribution of the results? [Actual publication would be via unfiltered online preprint?]
Or do you intend that the prediction market would evaluate reproducibility solely based on the claimed conclusions, without seeing the methodology and results?
Alas, even though the manipulation concern has been dealt with in great detail, everyone reinvents it as if it were new, without bothering to look at or respond to the literature: http://www.overcomingbias.c...
If there's real money on the line, couldn't this lead to certain replication attempts being fixed, like fights are sometimes fixed? It could be pretty easy to hide.
My guess is that the market would be so underpopulated that authors could always bet on replication at high odds and get their paper published. Maybe they would lose a bit of money, but that's likely worth it to get a publication out (see the toy calculation below).
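To put a rough number on that: in an LMSR market with a hypothetical liquidity parameter b = 100, the worst-case cost to an author of pushing the price from 50% to 90% "will replicate" is bounded and easy to compute:

```python
import math

b = 100.0  # assumed liquidity parameter of a thin market

# Shares of "replicates" needed to move the price from 0.5 to 0.9:
# with the other side untouched, price p implies q = b * ln(p / (1 - p)).
shares = b * math.log(0.9 / 0.1)                                 # ~219.7
paid = b * math.log(math.exp(shares / b) + 1) - b * math.log(2)  # ~160.9

print(shares, paid)
# If the paper does replicate, the shares pay ~219.7 for a gain of ~59.
# If it fails, the author loses the ~161 paid up front -- a bounded
# worst case that may be cheap relative to the value of a publication.
```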
It might be a good thing to do for high-profile studies getting a lot of press coverage (post-publication). Whenever that happens there's always a lot of commentary about why the study is amazing or terrible; a prediction market would be a good way to quantify general opinion and sift through some of that noise.