Academia functions to (A) create and confer prestige on associated researchers, students, firms, cities, and nations, (B) preserve and teach what we know on many general abstract topics, and (C) add to what we know over the long run. (Here “know” includes topics where we are uncertain, and practices we can’t express declaratively.)
Most of us see (C) as academia’s most important social function, and many of us see lots of room for improvement there. Alas, while we have identified many plausible ways to improve this (C) function, academia has known about these for decades, and has done little. The problem seems less a lack of knowledge, and more a lack of incentives.
You might think the key is to convince the patrons who fund academia to change their funding methods, and to make funding contingent on adopting other fixes. After all, this should induce more of the (C) that we presume patrons seek. Problem is, just like all the other parties involved, patron motives also focus more on function (A) than on (C). That is, state, firm, and philanthropic patrons of academia mainly seek to buy what academia’s other customers, e.g., students and media, also buy: (A) prestige by association with credentialed impressiveness.
Thus offering better ways to fund (C) doesn’t help much. In fact, history actually moved in the other direction. From 1600 to 1800, science was mainly funded via prizes and infrastructure support. But then prestigious scientific societies pushed to replace prizes with grants. Grants give scientists more discretion, but are worse for (C). Scientists won, however; now grants are standard, and prizes rare.
But I still see a possible route to reform here, based on the fact that academics usually deny that their prestige is arbitrary, to be respected only because others respect it. Academics instead usually justify their function (A) prestige as a proxy for the ends of functions (B) and (C). That is, academics tend to say that your best way to promote the preservation, teaching, and increase of our abstract knowledge is to just support academics according to their current academic prestige.
Today, academic prestige of individuals is largely estimated informally by gossip, based on the perceived prestiges of particular topics, institutions, journals, funding sources, conferences, etc. And such gossip estimates the prestige of each of these other things similarly, based on the prestige of their associations. This whole process takes an enormous amount of time and energy, but even so it attends far more to getting everyone to agree on prestige estimates, than to whether those estimates are really deserved.
Academics typically say that so sacred an end as intellectual progress is so hard to predict or control that it is arrogant of people like you to think you can see how to promote such things in any other way than to just give your money to the academics designated as prestigious by this process, and let them decide what to do with it. And most of us have in fact accepted this story, as this is in fact what we mostly do.
Thus one way that we could hope to challenge the current academic equilibrium is to create better clearly-visible estimates of who or what contributes how much to these sacred ends. If academics came to accept another metric as offering more accurate estimates than what they now get from existing prestige processes, then that should pressure them into adjusting their prestige ratings to better match these new estimates. Which should then result in their assigning publications, jobs, grants etc. in ways that better promote such ends. Which should thus improve intellectual progress, perhaps by large amounts.
And as I outlined in my last post, we could actually create such new, better estimates of who deserves academic prestige by creating complex impact futures markets. Pay distant future historians (e.g., a century or two from now) to judge which of our academic projects (e.g., papers) actually did more to add to what we know. (Or did more to preserve and teach what we know.) Also create betting markets today that estimate those future judgments, and suggest to today’s academics and their customers that these are our best estimates of who and what deserve academic prestige. (That citations are roughly lognormally distributed suggests this system’s key assumptions are a decent approximation.)
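As a concrete, entirely hypothetical sketch of how one such betting market might be run, here is minimal Python for a logarithmic market scoring rule (LMSR) market maker pricing a single binary claim, say “the 2125 historian panel will rank paper X in the top 1% of its field’s contributions to knowledge.” The class name, liquidity parameter, and claim wording are illustrative assumptions, not a spec from the post.

```python
import math

class LMSRMarket:
    """Minimal logarithmic market scoring rule (LMSR) market maker for one
    binary claim, e.g. "the (hypothetical) 2125 historian panel will rank
    paper X in the top 1% of its field's contributions to knowledge"."""

    def __init__(self, b=100.0):
        self.b = b          # liquidity parameter: larger b = deeper market
        self.q_yes = 0.0    # outstanding YES shares
        self.q_no = 0.0     # outstanding NO shares

    def cost(self, q_yes, q_no):
        # LMSR cost function C(q) = b * ln(exp(q_yes/b) + exp(q_no/b))
        return self.b * math.log(math.exp(q_yes / self.b) + math.exp(q_no / self.b))

    def price_yes(self):
        # Current YES price = market-implied probability of the claim
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy(self, outcome, shares):
        """Buy `shares` of 'yes' or 'no'; returns the cash cost charged."""
        old = self.cost(self.q_yes, self.q_no)
        if outcome == "yes":
            self.q_yes += shares
        else:
            self.q_no += shares
        return self.cost(self.q_yes, self.q_no) - old


# Hypothetical usage: a trader who disagrees with the informal prestige estimate.
market = LMSRMarket(b=100.0)
print(f"initial price: {market.price_yes():.3f}")   # 0.500
cost = market.buy("yes", 60)                         # optimistic trader buys YES
print(f"after buying 60 YES shares (cost {cost:.2f}): {market.price_yes():.3f}")
```

A nice property of this particular market maker is that its worst-case loss on a binary claim is bounded by b·ln 2, so the liquidity parameter doubles as an explicit, capped subsidy for eliciting these prestige estimates.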
These market prices would no doubt correlate greatly with the usual academic prestige ratings, but any substantial persistent deviations would raise a question: if, in assigning jobs, publications, grants, etc., you academics think you know better than these market prices who is most likely to deserve academic prestige, why aren’t you or your many devoted fans trading in those markets to make the profits you think you see? If such folks were in fact trading heavily, but were resisted by outsiders with contrary strong opinions, that would look better than if they weren’t even bothering to trade on their supposed superior insight.
Academics seeking higher market estimates of themselves and their projects would be tempted to trade to push up those prices, even when their private info didn’t justify such a move. Other traders would expect this, and push prices back down. These forces would create liquidity in these markets, and subsidize trading overall.
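To make that subsidy argument concrete, here is a toy simulation (hypothetical numbers, reusing the LMSRMarket class sketched above): a self-promoting academic pushes the price above the project’s assumed true chance of a favorable future judgment, and an informed trader who corrects the price earns a larger expected profit than in an unmanipulated market.

```python
import math

# Toy illustration (hypothetical numbers) of manipulation subsidizing informed
# traders, reusing the LMSRMarket class sketched above.

TRUE_PROB = 0.30   # assumed real chance the future historians judge the claim true


def shares_to_reach(market, target_price):
    # YES shares (positive) or NO shares (as a negative number) needed to move
    # the LMSR price to target_price, since price = 1/(1 + exp(-(q_yes - q_no)/b)).
    needed_diff = market.b * math.log(target_price / (1 - target_price))
    return needed_diff - (market.q_yes - market.q_no)


def informed_expected_profit(manipulated_price=None):
    market = LMSRMarket(b=100.0)
    if manipulated_price is not None:
        # A self-promoting academic pushes the price up, ignoring the true odds.
        market.buy("yes", shares_to_reach(market, manipulated_price))
    # An informed trader corrects the price back to TRUE_PROB by buying NO shares.
    no_shares = -shares_to_reach(market, TRUE_PROB)
    cost = market.buy("no", no_shares)
    return (1 - TRUE_PROB) * no_shares - cost   # NO pays 1 if the claim is false


print(f"without manipulation: {informed_expected_profit():.2f}")
print(f"with price pushed to 0.60: {informed_expected_profit(0.60):.2f}")
```

Under these made-up numbers the informed trader’s expected profit comes out around 8 without manipulation and around 18 with it, which is the sense in which manipulation attempts end up subsidizing informed trading.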
Via this approach, we might reform academia to better achieve intellectual progress. So who wants to make this happen?
Given the evidence that prediction markets are 70-75% accurate at predicting scientific replication in the fields examined, I wonder if, in the short term, this would incentivize scientists to engage in specific kinds of questionable research practices to make their studies appear like other highly-priced studies. With how common QRPs are (many surveys about this; a good table summarizing them is in the supplement of this article), I'm sure scientists would quickly identify which factors lead to higher prices and p-hack/HARK/remove data/use other tactics to increase the price of their work, at least in the short term. With how rarely studies are replicated, there's no way to be confident about the distribution of longevity for false-positive findings, and this could lead to false-positive research programs being highly priced for long periods of time, along with subsequent evolutions/offshoots (SSC post about a serotonin transporter mutation with 1,400+ studies claiming it's related to depression and other neuropsychiatric conditions, only to be found useless in a high-powered sample). Given how breakthroughs enable one another, any predictions made about scientists/research programs further out in time than the next scientific breakthrough will suffer in proportion to how much that breakthrough changes how science is conducted and which research programs follow it. More optimistically, this market might incentivize the adoption of registered reports, open data/code, and other scientific practices that are well understood to boost Robin's (C) option, which would be a very good thing.
Tbf I don't see academia as anything outside the church, lol. It seems like an alien concept and plays out like one too. Without an overarching goal it sorta devolves into workplace politics.