Response To Hossenfelder
In my last post I said:
In her new book Lost in Math, theoretical physicist Sabine Hossenfelder describes just how bad things have become. … To fix these problems, Hossenfelder proposes that theoretical physicists learn about and prevent biases, promote criticism, have clearer rules, prefer longer job tenures, allow more specialization and changes of fields, and pay peer reviewers. Alas, as noted in a Science review, Hossenfelder’s proposed solutions, even if good ideas, don’t seem remotely up to the task of fixing the problems she identifies.
In the comments she took issue:
I am quite disappointed that you, too, repeat the clearly false assertion that I don’t have solutions to offer. … I originally meant to write a book about what’s going wrong with academia in general, but both my agent and my editor strongly advised me to stick with physics and avoid the sociology. That’s why I kept my elaborations about academia to an absolute minimum. You are right in complaining that it’s sketchy, but that was as much as I could reasonably fit in.
But I have on my blog discussed what I think should be done, eg here. Which is a project I have partly realized, see here. And in case that isn’t enough, I have a 15 page proposal here. On the proposal I should add that, due to space limitations, it does not contain an explanation for why I think that’s the right thing to do. But I guess you’ll figure it out yourself, as we spoke about the “prestige optimization” last week.
I admitted my error:
I hadn’t seen any of those 3 links, and your book did list some concrete proposals, so I incorrectly assumed that if you had more proposals then you’d mention them in your book. I’m happy to support your proposed research project. … I don’t see our two proposals as competing, since both could be adopted.
She agreed:
I don’t see them as competing either. Indeed, I think they fit well.
Then she wrote a whole blog post elaborating:
And then there are those who, at some time in my life, handed me a piece of the puzzle I’ve since tried to assemble; people I am sorry I forgot about. … For example … Robin Hanson, with whom I had a run-in 10 years ago and later met at SciFoo. I spoke with Robin the other day. … The reason I had an argument with him is that Robin proposed – all the way back in 1990 – that “gambling” would save science. He wanted scientists to bet on the outcomes of their colleagues’ predictions and claimed this would fix the broken incentive structure of academia. I wasn’t fond of Robin’s idea back then. The major reason was that I couldn’t see scientists spend much time on a betting market. …
But what if scientists could make larger gains by betting smartly than they could make by promoting their own research? “Who would bet against their career?” I asked Robin when we spoke last week. “You did,” he pointed out. He got me there. … So, Robin is right. It’s not how I thought about it, but I made a bet. … In other words, yeah, maybe a betting market would be a good idea. Snort.
My thoughts have moved on since 2007, so have Robin’s. During our conversation, it became clear our views about what’s wrong with academia and what to do about it have converged over the years. To begin with, Robin seems to have recognized that scientists themselves are indeed unlikely candidates to do the betting. Instead, he now envisions that higher education institutions and funding agencies employ dedicated personnel to gather information and place bets. …
This arrangement makes a lot of sense to me. First and foremost, it’s structurally consistent. … Second, it makes financial sense. … Third, it is minimally intrusive yet maximally effective. … So, I quite like Robin’s proposal. Though, I wish to complain, it’s too vague to be practical and needs more work. It’s very, erm, academic. …
That’s also why Robin’s proposal looks good to me. It looks better the more I think about it. Three days have passed, and now I think it’s brilliant. Funding agencies would make much better financial investments if they’d draw on information from such a prediction market. Unfortunately, without startup support it’s not going to happen. And who will pay for it?
This brings me back to my book. Seeing the utter lack of self-reflection in my community, I concluded scientists cannot solve the problem themselves. The only way to solve it is massive public pressure. The only way to solve the problem is that you speak up. Say it often and say it loudly, that you’re fed up watching research funds go to waste on citation games. Ask for proposals like Robin’s to be implemented.
As Hossenfelder has been kind enough to consider my proposal in some detail, let me dig a bit into her proposal:
What we really need is a practical solution. And of course I have one on offer: An open-source software that allows every researcher to customize their own measure for what they think is “good science” based on the available data. That would include the number of publications and their citations. But there is much more information in the data which currently isn’t used. … individualized measures wouldn’t only automatically update as people revise criteria, but they would also counteract the streamlining of global research and encourage local variety. (more)
We created this website so you can generate a keyword cloud from your publications. You can then download the image and add it to your website or your CV. You can also generate a keyword cloud for your institution so that, rather than listing the always-same five research groups, you can visually display your faculty’s activity. In addition, you can use our website to search for authors with interests close to a list of input keywords or author names. This, so we hope, will aid you in finding speakers for conferences or colloquia while avoiding the biases that creep in when we rely on memory or personal connections. (more)
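To make the first of these proposals more concrete, here is a rough sketch of what such a customizable measure might look like. This is my illustration, not her actual software; the feature names and weights are made up.

```python
# Hypothetical sketch of a customizable research metric: each evaluator
# supplies their own weights over bibliometric features, so the same data
# yields different rankings depending on what they count as "good science".

def custom_score(features, weights):
    """Weighted sum of whatever bibliometric features are available."""
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

# Toy per-author feature vectors (entirely made up).
authors = {
    "A": {"papers": 40, "citations": 900, "datasets_shared": 2},
    "B": {"papers": 12, "citations": 300, "datasets_shared": 10},
}

# Two evaluators with different notions of quality.
citation_heavy = {"papers": 1.0, "citations": 0.1}
openness_heavy = {"citations": 0.02, "datasets_shared": 5.0}

for label, weights in [("citation-heavy", citation_heavy), ("openness-heavy", openness_heavy)]:
    ranking = sorted(authors, key=lambda a: custom_score(authors[a], weights), reverse=True)
    print(label, ranking)  # the two weightings rank A and B differently
```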
The basic idea here seems to be that what scientists read today is distorted somehow by their reading systems. For example, instead of just reading the good stuff, current reading systems might induce scientists to read prestigious and popular stuff. If so, then by giving scientists better tools for finding good things to read, distortions would be fewer, good science would be read more, and thereby gain more prestige and popularity, relative to the current situation.
Okay, current systems for finding things to read probably do introduce some distortions. But today there are so many ways to find things to read, and so many ways to make new reading systems, that I really find it hard to see this as the limiting factor. Instead, I expect that incentives are mainly to blame, for example by biasing readers toward prestigious and popular stuff. You and your papers are looked on more favorably when they are seen as building on other prestigious and popular papers.
Consider an analogy with citations. Someone who is honestly just trying to do good science will sometimes need to read other papers, and when they write papers they will sometimes make enough use of another paper that they should mention that fact as a citation. If we had a big population of people who read and cited papers solely for the purpose of doing science, and whose priorities were representative of science, then we could use stats on who this group read or cited as an indicator of scientific quality. You are higher quality if you write more things that are read or cited. And this quality metric could even be used to hand out publications, jobs, funding, etc.
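As a toy illustration of that idealized case, with invented citation records made purely for scientific reasons, a raw citation count would then serve directly as the quality proxy:

```python
# Toy illustration of the idealized case above: if papers were cited solely
# for scientific reasons, raw citation counts would track quality.

from collections import Counter

# Invented citation records: (citing_paper, cited_paper, cited_author).
citations = [
    ("p1", "p0", "Alice"),
    ("p2", "p0", "Alice"),
    ("p5", "p0", "Alice"),
    ("p2", "p4", "Bob"),
    ("p3", "p4", "Bob"),
]

quality_proxy = Counter(author for _, _, author in citations)
print(quality_proxy.most_common())  # [('Alice', 3), ('Bob', 2)]
```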
However, as we all know, simple citation counts have problems when these assumptions don’t hold. When we count papers equally, ignoring that some mattered more. Or when we count boring but prestigious and popular papers more. Or when referees lobby to get their works cited. Or when authors trade favors to cite allies. The same sort of incentives that distort who scientists cite can also distort who they read, especially when reading stats are made visible. Like when people today make bots to download their papers from servers to produce big download counts.
I say the main problem is bad incentives, not bad what-to-read tools. So better tools to pursue existing incentives will make only limited gains.
Added 20Dec: Okay, I just talked to Hossenfelder again, and she explained that the intended purpose of her metrics was to rate people when deciding whom to hire. But the problems are similar for metrics to decide who to read: once others can see your metric choice, you have incentives to push metrics that favor you personally, and that make people see you as a wise not-too-nonconformist.
It isn’t bad to expand the space of easily used metrics, but I’d mainly want to push for visible evaluations of metrics on grounds of how useful they are when we all use them. For example, given a matrix of how well each metric predicts future values of other metrics, which metrics seem the most predictive overall?
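Here is a rough sketch of how one might run that comparison, with invented metric names and simulated panel data standing in for real researcher histories:

```python
# Rough sketch of evaluating metrics by cross-prediction: how well does each
# metric's value at time t predict every other metric's value at time t+1?
# Uses simple per-pair linear regression; data and metric names are invented.

import numpy as np

rng = np.random.default_rng(0)
metrics = ["citations", "downloads", "coauthors", "grants"]
n_researchers = 200

# Fake panel data: each researcher's metric values at time t and at time t+1.
past = rng.normal(size=(n_researchers, len(metrics)))
future = past @ rng.normal(scale=0.5, size=(len(metrics), len(metrics))) \
         + rng.normal(scale=1.0, size=(n_researchers, len(metrics)))

def r_squared(x, y):
    """R^2 of predicting y from x with one-variable least squares."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()

# Matrix entry [i, j]: how well past metric i predicts future metric j.
predictiveness = np.array([[r_squared(past[:, i], future[:, j])
                            for j in range(len(metrics))]
                           for i in range(len(metrics))])

# Rank metrics by average ability to predict the others' future values.
avg = predictiveness.mean(axis=1)
for name, score in sorted(zip(metrics, avg), key=lambda p: -p[1]):
    print(f"{name:10s} mean R^2 = {score:.2f}")
```

The metric with the highest average fit is the one you’d lean on most, though in practice you’d want out-of-sample tests, and to watch for metrics that become gameable once people know they are used for decisions.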