17 Comments

You might be interested in some work by Glenn Shafer out of Rutgers. He has a paper about using the language of betting to replace p-values in scientific communication here: http://www.probabilityandfi.... I am tempted to summarize it as "write papers as though replication markets already existed."

This is related to other work he has done with Vladimir Vovk, where they have developed game-theoretic probability. The core concept is that probability arises naturally out of perfect-information games among three players: one player offers bets, another accepts them, and a third decides the outcome. They maintain a website here: http://www.probabilityandfi... with their working papers, and there is a new book due out this month.
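For intuition, here is a minimal sketch of the binary version of that game in Python. The three roles and the capital-update rule follow their basic protocol, but the constant price and the fractional staking strategy are illustrative assumptions of mine, not anything from their work. In the framework, probability-like laws correspond to the claim that Skeptic cannot multiply his capital by much.

```python
# Minimal sketch of a binary forecasting game in the spirit of Shafer & Vovk's
# game-theoretic probability. The roles (Forecaster, Skeptic, Reality) and the
# capital update follow the basic protocol; the strategies are illustrative only.
import random

def play(rounds=10_000, seed=0):
    rng = random.Random(seed)
    capital = 1.0  # Skeptic starts with unit capital
    for _ in range(rounds):
        p = 0.5                # Forecaster announces a price for the event y = 1
        stake = 0.1 * capital  # Skeptic buys `stake` tickets, each costing p
        y = 1 if rng.random() < p else 0  # Reality decides the outcome
        capital += stake * (y - p)        # each ticket pays 1 if y = 1, else 0
    return capital

if __name__ == "__main__":
    # Here Reality is simulated as a fair coin, so Skeptic's capital is a
    # martingale and he cannot expect to grow it; systematic mispricing by
    # Forecaster is what would let Skeptic multiply his capital.
    print(play())
```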

Well, that's indeed nice of them. I wish you success when this actually launches.

It's not in KeyW's usual line of business, but happily they let me pursue it anyway, so we will be the program lead for the Science Prediction Market Project team. The public website will appear sometime after program kickoff. We released this early to gather some info to discuss at kickoff.

We have a platform, and will say more when we are allowed to do so.

It will be great to see how this goes! I'm not sure what platform you plan to use to amass all these predictions, but if the Metaculus software might be of use, let me know.

Weird. There's nothing at replicationmarkets.com, and the domain seems to belong (since 2018-09-25) to the KeyW Corporation, an intelligence and cybersecurity firm.

The idea is to use chance of replication as a clue *in addition* to your other clues. There's no requirement that you only approve papers with high estimates, or that you treat this estimate as a good sign all across the chance spectrum. If we get enough data, we can check how editors are actually treating it.

Yes, of course papers are high-dimensional things, as are their replications. So yes, there are many judgement calls to be made in how to project that down to a small number of the most informative dimensions.

Speaking as an editor of a social science journal, I don't see a higher probability of replicability as a monotonically good thing, either for original work or for replications. For original work, a claim that everyone is sure will replicate is unlikely to be a big contribution. Occasionally authors who tackle a novel issue do an incredibly thorough job of establishing it, but most novel papers leave a lot of open questions for future work. I would rather publish a paper tackling ambitious claims with a 75% chance of replication than one tackling claims fairly obvious from the extant literature with a 95% chance of replication.

I'm even less interested in high probabilities of replication when it comes to publishing replications themselves, or more precisely what our journal is calling pre-registered re-examination (p-rex) proposals. One of our key criteria is that the replication should be "informative", which means that there is substantial uncertainty about whether the claim will survive re-examination. Otherwise, why bother?
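One way to make "informative" precise (my gloss, not necessarily the journal's actual criterion) is expected information gain: the expected number of bits a re-examination yields is the binary entropy of the prior chance that the claim survives, which peaks at 50% and shrinks toward zero for near-certain claims.

```python
# Binary entropy as a rough "informativeness" score for a proposed replication.
# This is an illustrative gloss on the criterion above, not its definition.
import math

def binary_entropy(p):
    """Expected information, in bits, from a binary outcome with P(success) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.50, 0.75, 0.95):
    print(f"P(claim survives) = {p:.2f} -> {binary_entropy(p):.3f} bits")
# -> 1.000, 0.811, and 0.286 bits: near-certain claims are barely worth re-testing.
```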

Ultimately, we want a literature filled with replicated and highly replicable claims. But I don't think that means we want to demand that each original paper and re-examination be this way before publication.

As a related issue, I find it a bit odd that you talk about "experiments" as replicating, rather than individual claims. A single experiment typically generates many claims about statistical association. There might be a strong main effect that I think will replicate, alongside an interaction or process measure that indicates why the main effect arises; the latter is far more tentative, but a step toward better understanding.

Asserting that you're going to require something and then not actually requiring it seems to me to be a costly signal of something else, maybe the opposite.

I'd think that the American Journal of Political Science, Cognition, and any of the experimental journals among the 156 listed here https://cos.io/rr/ might be good candidates, but I can't say with any confidence that any one of them will be interested; just that they seem amenable to open science in general and that they've in some ways linked their reputations to replicability.

If you have a specific list of journals to contact, that would be useful. But the main issue will be their willingness, and once they've heard about us, they can most easily determine that.

Hi Robin, this seems like a really cool project!

For finding journal partners, you might actively seek out those that have already strongly signaled a commitment to open science. Some costly signals (in the sense of placing additional burdens on authors) that come to mind:

1) Requiring pre-analysis plans (some or all AEA journals?)

2) Requiring open code and data (some AEA journals; the journal Cognition, with a report on Cognition's experiences here: https://royalsocietypublish...)

3) Actually taking the time to verify that code is, at a minimum, computationally reproducible, as the AJPS does in partnership with the Odum Institute: https://ajps.org/ajps-repli...

Another avenue would be to look for journals that select for open-science-minded folks, for instance those that encourage the submission of registered reports: https://www.nature.com/arti...

I work on this stuff professionally and would be happy to discuss further over email.

Not very confident. Hopefully this test will tell us a lot about those key questions soon.

Congrats! I hope this ends up moving things in a good direction. At first blush, it seems to have the potential to do so.

How confident are you that people in charge of academic journals truly want to "increase the rate at which their published articles replicate" and would be willing to perform additional work to actually make this happen? How confident are you that people in charge of academic journals, especially those in the "social and behavioral sciences," truly regard these "low rates of replication" to be a "problem" that needs to be solved? That is, what if "X isn't about Y" here (if I may borrow a phrase)?

We aren't yet allowed to say publicly.
