During my graduate studies (’93-’97), I looked at the history of prizes in science. I learned that from ~1600-1800, prizes funded a great deal of science, much more than did grants. But around 1830, science elites controlling the top scientific societies in both Britain and France defrauded donors in order to switch funding to grants, which society insiders then directed mostly to insiders. Thereafter such societies insisted that donors must fund grants, not prizes, if they wanted their donations to gain prestigious scientific society associations.
Later, ~1900, tenure became common in academia. Then ~1940, peer review became common in publications, and ~1960 in grants. Also around midcentury, journalism switched from its usual mode of questioning and investigating the claims made to it, to accepting whatever academics said and trying to “communicate” that to the public. In the ~1980s, college rating systems became widely available to the US public, ratings which depended mainly on how elite academics rated those colleges.
All of these changes were ways in which academic elites wrested control of academia from outsiders who had previously imposed some degree of incentives and accountability. The elites of almost any profession would love to fully control it, being given resources to spend at their discretion, with little need to accommodate the demands of customers, investors, regulators, or anyone else. But academia managed to achieve this ideal far more than most, due to its peak prestige. Via elite schools, academics control prestige in many other areas of life.
I review this history to make clear just what academic reformers are up against. It is far from sufficient to enumerate academic failures; you’ll have to develop concrete alternatives that can win prestige fights against the usual academics. History has long been moving against you; you’ll have to somehow reverse that strong tide.
In my 40+ years of thinking about how we might reform academia, I’ve considered many different parties as potential allies in this venture. First I and other hypertext publishing fans hoped to use backlinks to make criticism of claims easy to find from those claims, thus recruiting critics and honest readers into our reform venture. But we’ve now achieved that ease of finding criticism, without much impact. Readers care far more about publication prestige than about which criticisms are persuasive to those who read them with care.
Second, I saw the public as an ally willing to bet lots on science and related policy questions. However, we’ve seen that if academics choose to ignore such bets, the public isn’t much interested in them either. And laws continue to block such bets.
Third, I saw research patrons as allies. Surely they’d want to fund research in ways more likely to induce intellectual progress, if only they understood the better ways. Like prizes instead of grants. But then I learned the history of academia that I summarize above. No, patrons used to use better methods, but caved when academics threatened to take away their prestige by association. Patrons care more about such prestige with academics than they do about intellectual progress.
Fourth, I hoped journal editors might be allies. But when we showed that polls and prediction markets could predict which papers wouldn’t replicate, and tried to get journal editors to publicly declare that they’d consider such predictions as part of their article approval process, all the journals refused. Journals are happy to publish sexy papers that don’t replicate.
Now my best hope is to recruit as allies future folk willing to give honest appraisals of their distant past. One key claim that elite academics are not willing to give up on is this:
The people whom academics now most celebrate with jobs, funding, publication, and publicity are in fact the people whom future folks, centuries later and carefully considering the question, are most likely to identify as those who should have been listened to most for the purpose of speeding intellectual progress.
Just as we can sometimes get auditors, judges, juries, and even journalists to give honest independent appraisals of others’ acts and accomplishments, there’s a decent chance that we can find ways to fund “historians” (who might not be professionally credentialed as such) to look back carefully at particular areas of research, and rank past researchers in terms of who should have been listened to more. And compound interest over centuries should let us spend lots then on such evaluation, even when it is funded today by only small amounts set aside.
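As a rough illustration of that compound-interest point (the dollar figure, rate, and horizon here are all my own assumptions, not a proposal):

```python
# Hypothetical sketch: a small sum set aside today, compounding for
# centuries, can fund a substantial evaluation effort later.
def future_value(principal, annual_rate, years):
    """Compound a principal at a fixed annual rate over the given years."""
    return principal * (1 + annual_rate) ** years

# e.g. $10,000 at a 3% real return, evaluated 200 years from now:
print(f"${future_value(10_000, 0.03, 200):,.0f}")  # roughly $3.7 million
```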
The first order of business in this reform effort is to actually fund such efforts to rank researchers from centuries ago, to show that we can in fact robustly enough rank them now. Once we show that diverse approaches give substantially correlated answers, we can search for approaches whose expected results are the most correlated with others, at the lowest cost. (The best way might randomize over many methods.)
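One minimal way to check whether two evaluation approaches "give substantially correlated answers" is a rank correlation over the same set of past researchers. The rankings below are made-up placeholders, not real evaluations:

```python
# Spearman rank correlation via the classic formula (assumes no tied ranks).
def spearman(ranks_a, ranks_b):
    n = len(ranks_a)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical ranks of six past researchers from two independent methods:
method_1 = [1, 2, 3, 4, 5, 6]  # e.g. one historian team's ranking
method_2 = [2, 1, 3, 5, 4, 6]  # e.g. an independent citation-trail analysis
print(round(spearman(method_1, method_2), 3))  # 0.886
```

A correlation near 1 across many such method pairs would be the "robust enough" evidence the paragraph above asks for; a correlation near 0 would mean the methods are not measuring the same thing.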
Once we have demonstrated such a capacity, we could create markets today on assets that pay off proportional to (some monotonic transform of) rankings of current researchers. (Such assets should be built out of assets that accumulate long term value, such as stock index funds.) And we could make markets in such assets conditional on such evaluations being done centuries later on their areas of research. This approach would thus be robust to the fraction of such areas later evaluated. It might be most all such areas, or just a few, depending on how much funding becomes available, how far in the future evaluations are done, and how cheap they end up being.
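A sketch of that conditional payout rule, where all names and numbers are my own illustrative assumptions:

```python
# Asset pays off proportional to a researcher's future rank score, built on
# top of a long-term asset (e.g. a stock index fund). If the area is never
# evaluated, the trade is called off and the buyer's money is returned,
# grown along with the underlying index.
def settle(price_paid, index_growth, evaluated, rank_score=0.0):
    if not evaluated:
        return price_paid * index_growth   # trade reverted, funds returned
    return rank_score * index_growth       # pays off on the ranking

# Buyer pays $1.00; centuries later the index has grown 300x and the
# researcher scores 0.5 on a 0-1 monotonic transform of their ranking:
print(settle(1.00, 300.0, True, 0.5))  # 150.0
print(settle(1.00, 300.0, False))      # 300.0
```

Because unevaluated areas simply revert, traders need not know in advance which few areas will actually be evaluated, which is what makes the scheme robust to that fraction.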
With market estimates of future rankings of current researchers, we could then highlight contrasts between such rankings and the people whom academic elites now choose to celebrate, via jobs, funding, publications, etc. Such elites would then have to either (A) dismiss such markets as ignorant, (B) change their choices to better align with market estimates, or (C) trade in these markets to make market estimates better match their non-market choices.
Results (B) or (C) would give academics stronger incentives for, and thus rates of, intellectual progress. Academic institutions could then use such market prices as outcomes for futarchy-governance methods to choose jobs, grants, publications, etc.
To discourage (A), I’d start this approach in a few limited research areas, where I’d pick shorter term evaluation periods, and subsidize the markets enough to induce informative prices, which we’d then show to be informative at the end of those evaluation periods. Once prices were taken somewhat seriously in such areas, we’d switch to longer term evaluation periods. And then researcher efforts to manipulate their own prices could provide sufficient subsidies, allowing extra subsidies to be moved to new areas, to repeat this process. And hopefully with success, we’d attract more sources of subsidies.
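On "subsidize the markets enough to induce informative prices": with a logarithmic market scoring rule (a standard automated-market-maker design), the sponsor's worst-case subsidy is capped at b·ln(n) for an n-outcome market, so the cost of seeding a new area is known in advance. The parameters below are illustrative assumptions:

```python
import math

# Logarithmic market scoring rule (LMSR) market maker.
def lmsr_cost(quantities, b):
    """Cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def trade_cost(quantities, outcome, amount, b):
    """Price charged to buy `amount` shares of `outcome` in state `quantities`."""
    after = list(quantities)
    after[outcome] += amount
    return lmsr_cost(after, b) - lmsr_cost(quantities, b)

b = 100.0       # liquidity parameter; sets the subsidy scale
q = [0.0, 0.0]  # two-outcome market, no shares sold yet
cost = trade_cost(q, 0, 50.0, b)  # price of the first 50 shares of outcome 0
bound = b * math.log(2)           # max sponsor loss for 2 outcomes, ~69.31
print(round(cost, 2), round(bound, 2))
```

Raising b deepens the market (prices move less per trade) at the cost of a proportionally larger maximum subsidy, which is the lever a sponsor would tune per research area.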
And that’s my current best vision for reforming academia. Academia is one of the hardest social spheres to reform, given its peak prestige, and none of the groups you might hope to make your allies here can actually be counted on. So my best ally hope is future “historians” looking back to say who should have most been listened to for the intellectual progress that was actually achieved.
I don't think the problem with any of the failed approaches you mentioned was the nature of the approach.
And I don't think your current proposed alternative is any likelier to succeed.
I think you're missing some intuitions about momentum and marketing. ANY approach, to solving ANY problem, will only get "off the ground" in a real way if resources are devoted to hammering home the message, repeating it, simplifying it, promoting it, over and over and over and over and over again, from multiple voices, gradually climbing the prestige ladder. That is what it takes to make something "a thing".
Then, when you have a Thing, it is time to evaluate whether the approach is fundamentally or structurally flawed, whether it is incentive-compatible, etc etc. And flawed Things tend to fail.
But almost none of your ideas are even Things. The only one that has gained momentum is prediction markets, and that took *decades* and the rise of a true prediction-market "community" and sustained funding and multiple companies and so on. And, as you often point out, the implementation of these successful prediction markets is often flawed and doesn't follow the details you wrote up earlier. THAT IS HOW IT GOES. It takes 30 years for a two-word phrase to gradually become "mainstream" enough that people in a medium-sized subculture will have heard of it.
"Science reform" is a Thing today, just not a successful thing.
Specific science-reform policies mostly aren't big enough to be Things.
Now, you don't just want an idea to be "a Thing", you want it to be strong enough to *actually win*. That is extremely ambitious. I would like to believe that this is possible but to be honest I have little faith that humans can actually set out to do something on the scale of "make academia accountable" and have the overall arc play out in a way similar to what they intend. But you are unusually good at gaming out the incentive-compatibility of ideas. Perhaps, if any of them became Things, they would then succeed. But they're *not.*
I don't feel, reading this, that there's enough "oomph" in historical ranking that it would connect to forward-looking reforms. This is policy; you are trying to get people on board with changing their behavior; and it's not clear to me how developing any new information source would do that, if you don't think information moves people in general...
It's too abstract. You're not *pitching* me. You're not reassuring me that it would totally work, that People Are Making It Happen, that all the what-if uncertainties that come to mind can be dispelled and we are marching along the royal road to Totally Winning Forever. You are not doing *even enough of this to convince me to spend a tiny amount of my own cash on the idea*, let alone connecting with anybody more mainstream.
Now ^ is a confident and un-caveated version of my intuition; it could be that you regularly move in circles where the emotional gap doesn't matter.
But if you suspect I'm right I think you really need to talk to some kind of coach on marketing and really *learn* how to do emotional appeal.
It's not a solution in all areas, but for experimental academic research it seems like improving replication norms would help a lot. And that's something that's already the norm in most physical sciences, to at least some degree.
One way to strongly enforce replication would be to require that any new finding have an independent replication study performed before publication, with the replication study published at the same time. However, this could significantly slow down research in some areas, and I'm not sure that's a trade-off we should make.