Thirty-four years ago I left physics with a Master's degree to start a nine-year stint doing AI/CS at Lockheed and NASA, followed by 25 years in economics.
"So I have just one wish for you—the good luck to be somewhere where you are free to maintain the kind of integrity I have described, and where you do not feel forced by a need to maintain your position in the organization, or financial support, or so on, to lose your integrity. May you have that freedom."
Theory shouting at theory shares the same empirically defective founding postulates. Observe reality, see which part(s) physics omits for mathemtatical convenience, and there's the problem. The solution must exist outside physics.
Physics knows hugely more about aspirin than does chemistry. If you have a headache, you need a chemist not a physicist.
Did you delete my comment proposing a different way to improve science, and also ban me from commenting via my Twitter account that I'd been signed in with? If this is a glitch, automated filter, or misunderstanding, let's fix it. If it's intentional, how is that rational? (BTW I don't appreciate the lack of communication about what the issue is – my comment just disappeared without explanation and clicking the log in with Twitter button results in the message "There was an error submitting the form.") Here is a copy of what I wrote, for reference: http://curi.us/2161-paths-f.... Agree or disagree, it's a on-topic good-faith effort to contribute to the discussion.
In [Can Foundational Physics Be Saved?](http://www.overcomingbias.c..., Robert Hanson proposes prediction markets to evaluate the future impact and value of scientists and research. This is intended to help address cognitive biases and incentive problems (like overhyping the value of one’s own research, and seeking short term popularity with peers, to get funding and jobs).
Markets strike me as too much of a popularity contest where outlier ideas will have low prices. I don’t think letting people bet on things will do a good job of figuring out which are the few positive outliers out of the many mostly-bad outliers. Designing good, objective ways to resolve the bets and pay out the winners will also be very difficult. And historians are often mistaken (in many ways, even more so than the news, which often gets the facts wrong about what happened yesterday), so judging by what future historians will think of today’s scientists is not ideal and can differ from what’s actually true.
So, in line with [Paths Forward](http://fallibleideas.com/pa... (including the additional information linked at the bottom like [Using Intellectual Processes to Combat Bias](https://rationalessays.com/..., I have a different idea about how to improve science. It is *online discussion forums* and *a culture of answering criticism*.
Scientists and research projects should explain what they are doing and why it makes sense, in writing, and anyone in the public should be able to post criticism. Basic standards for discussion tools are listed in footnote [1].
Most readers are reacting by thinking discussions will be low quality and ineffective. There are many cultural norms, discussed in the linked essays and in [books](http://fallibleideas.com/bo..., which can improve discussion quality and rationality. But that’s not enough. People can read about how to have a truth-seeking discussion and then still fail badly. There already exist many low quality online discussions. Why, then, will the ubiquitous use of discussion venues help science?
Because of the expectation of answering every criticism received.
This often won’t be done. What’s the enforcement? First we’ll consider the vast majority of cases where a researcher or research project doesn’t get much attention. A few forums will get too many posts to answer, and we’ll address the later. But suppose some obscure scientist receives one criticism on his forum and ignores it. Now what?
Today, if I find a mistake by a scientist, I can write a blog post explaining the issue and arguing my point. Then I can tweet it out, share it on popular discussion forums, and hopefully draw some attention to it. What will people say? Many will try to debate. They will agree or disagree with my criticism. The Paths Forward approach will transform this situation into a different situation:
I write my criticism and post or link it at the discussion forum for the scientist or research project. They ignore me. Then I say to people: “I wrote X criticism *and* the relevant scientists did not respond.” And no one then debates with me whether my criticism is correct or not. That doesn’t matter. Everyone can clearly see the scientist has violated truth-seeking norms whether my criticism is correct or not. “He did not answer X argument…” is much more objective and clear-cut than “He is wrong because of X argument…”
Norms of having open discussions, where criticism is expected to be answered, would improve the current situation where hardly any criticism is written or answered, and little discussion takes place. And methodological criticisms – that someone did not respond to a criticism – are much easier to evaluate than scientific criticisms.
What if a scientist gives a low quality answer instead of a non-answer? This gives a critic more to work with. He can write a followup criticism. If he does a good job, then it will get progressively harder for the scientist giving a succession of bad answers to avoid saying anything that is wrong *and* easy for many people to evaluate. It’s hard to keep responding to criticism, including followups, and do it badly, but avoid anything that would noticeably look bad.
And this leads into the other main issue: What if scientists get too many criticisms to address and trying to keep up with them consumes lots of time? This would be an issue for popular scientists, and it could also be an issue when an obscure project gets even just one highly persistent critic. Someone could write dozens of followup criticisms that don’t make much sense. Methods for dealing with these issues are explained in the [Paths Forward](http://fallibleideas.com/pa... articles linked earlier, and I’ll go over some main points:
There’s no need to repeat yourself. The more your response to a criticism addresses general principles, the more you can re-use it in response to future inquiries. If people bring up points repetitively, link existing answers (including answers written by other people, which you are willing to take responsibility for just as if you wrote it yourself).
If there is a pattern of error in the criticisms, respond to that pattern itself instead of to each point individually.
If you get a bunch of unique criticisms you’ve never addressed before, you should be happy, even if you suspect the quality isn’t great. You can’t know if they are true without considering what the answers to those criticisms are. It’s a good use of your time to think through new and different criticisms which don’t fall into any pattern you’re already familiar with. That is a thing you can’t have too much of, and which is hard for critics to provide. The world is not full of too many *novel* criticisms. The vast majority of criticisms are boring because they fit into known patterns, like fallacies, and pointing that out and linking to a text addressing the issue is cheap and easy (and if people did that regularly, it would help spread knowledge of those common fallacies and other patterns of intellectual error, to the point that eventually people would stop making those errors so much).
It’s important, with suspected bad ideas, to either address them individually or address them by connecting them to some kind of general pattern which is addressed (sometimes we criticize types or categories of ideas, e.g. there are criticisms of all ad hominem arguments as a group). Ignoring a suspected bad idea with no answer - no ability to actually say what’s bad about it – is irrational and allows for bias and ignoring important, good criticisms. There is no way to know which criticisms are correct or high quality other than answering them. Circumstantial evidence, like whether the first words of sentences are capitalized, whether it uses slang, or whether the author has over 10,000 fans, are bad ways to judge ideas. Ideas should be judged by their content, not their source.
If you get tons of attention, you ought to be able to get some of your many admiring fans to help you out by acting as your proxies and answering common criticisms for you (primarily by handing out standard links). You can give an issue personal attention when your proxies don’t know the answer. You can also hire proxies if you’re popular/important enough to have money. Getting lots of interest in your work, and having resources to deal with it, generally happen proportionally. Using proxies to speak for you is fine as long as you take responsibility for what they do – if someone does a bad job, either address the issue yourself or fire him, but don’t just let it continue and then claim to be answering criticism through your proxies.
I’m not attempting to present an exact set of rules for people to follow, nor an exact set of instructions for what people should do. Science is a creative process. It requires flexibility and individuality. These are rough guidelines which would improve the situation, not exact steps to make it perfect. These proposals would increase the quantity of discussion and offer some improved ways for interested parties to engage. It offers mechanisms for identifying and correcting errors which offer better clarity and transparency when they are violated. I think the same proposal would improve every intellectual field – philosophy, psychology, history, economics – not just science.
[1] Forums should:
- allow *public* access
- have *permalinks* for every comment which are expected to still work in 20 years
- *don’t moderate* or delete content for being disagreeable (only delete things like doxing, shock porn, and spam bots advertising viagra, not mere flaming, ad hominems, rudeness, or profanity)
- no restrictive *length limits*. should be like 100k words, not 10k or 280 characters
- no *time limits* after which additional comments are disabled
- allow *links* to external sites
- support nested *blockquotes*.
These simple standards are egregiously violated by currently popular forums like Twitter, Facebook and Reddit. The violations are intentional, not a technical issue.
Note that people don’t need to run their own forums. Each project can add a new sub-forum on a site that hosts many forums. Technologically, creating thousands of mostly-silent forums can be very cheap and easy. And there can easily be tools to monitor many forums at once and be notified of new posts. This technology pretty much already exists.
Dr.: Hossenfelder: "It was one year after Lee Smolin and Peter Woit published books that were both highly critical of string theory, which has long been one of the major research-bubbles in my discipline. At the time, I was optimistic – or maybe just naïve – and thought that change was on the way. ....It’s not how I thought about it, but I made a bet. The LHC predictions failed. I won. Hurray. Alas, the only thing I won is the right to go around and grumble “I told you so.” What little money I earn now from selling books will not make up for decades of employment I could have gotten playing academia-games by the rules..."
Umm - but what Sabine Hossenfelder really did these times? She jumped into string theory bandwagoon hype and she wrote one publication about extradimensions after another (1, 2, 3, 4, 5, 6, 7, 8,..). Maybe she really suffered during it maybe not, I dunno - but she still wrote it...;-)
Of course these "ugly" speculations turned out to be the same nonsense, like the "beautiful" string theory itself and her book is just conjuncturalist cash cow project: after wit is everyone’s wit. Is it really the most effective way, how to combat with groupthink in science?
One thing I wish would happen: create a nice, asthetic, smooth way for a smart student to go from the two-slit experiment to QED/QCD [in 2-3 years], while having clarity what the input parameters are in the model (=needs to be measured), what is predicted by the model (=this comes out and is measurable), where an approximation is being made, what the current spacetime model is assumed to be, etc. Also, a mathematician should be able to look at this "journey" and approve that it's not too handwavy.
Given that there's "not much to do", it's weird that there isn't a 1% group somewhere out there who thinks this is worthwhile.
A good example (but probably too mathy) is the work of Tamas Matolcsi: `Spacetime without reference frames` and `Ordinary thermodynamics`.
possible typos:not quite as an autonomous a force -> not quite as autonomous a forcebe employees of first that trade -> be employees of firms that tradetrades that they info the collect induces -> trades that the info they collect induces
400,000 theorist-years and some 3 million published pages remain empirically sterile. Black hole-neutron star merger GW170817 killed slow light theories. They resurrected with "corrected" parameters and attacked LIGO signals for being noise. "Noise" exactly predicted where the optical signal would appear - and it did.
One undergrad-day in a microwave rotational spectrometer falsifies non-classical gravitations, dark matter, and SUSY First, we live in a net matter universe. Second, Sakharov criteria allow for necessary conservation violations. Then, use a crafted molecule to falsify beautiful exact vacuum symmetry founding postulates, DOI:10.1002/anie.201704221. Chemists are doers not whiners.
Your note about how your book on academia became focused on physics reminds me of the suggestion that Darwin's Origin of the Species be rewritten to focus on pigeons:
I hadn't seen any of those 3 links, and your book did list some concrete proposals, so I incorrectly assumed that if you had more proposals then you'd mention them in your book. I'm happy to support your proposed research project, even if I'm not quite as hopeful about it as you. I don't see our two proposals as competing, since both could be adopted. My guess/hope is that within the system you propose users would find the market prices that I propose to be especially useful indicators.
As my hero said:
"So I have just one wish for you—the good luck to be somewhere where you are free to maintain the kind of integrity I have described, and where you do not feel forced by a need to maintain your position in the organization, or financial support, or so on, to lose your integrity. May you have that freedom."
It's getting harder and harder nowadays.
That's pretty much where I was going, although I'd understand if a newly minted particle-physics Ph.D. reading these comments became very upset.
This needs a narrative update... but here we go:
mazepath(.)com/uncleal/EquivPrinFail.pdf
Theory shouting at theory shares the same empirically defective founding postulates. Observe reality, see which part(s) physics omits for mathematical convenience, and there's the problem. The solution must exist outside physics.
Physics knows hugely more about aspirin than does chemistry. If you have a headache, you need a chemist, not a physicist.
Did you delete my comment proposing a different way to improve science, and also ban me from commenting via the Twitter account I'd been signed in with? If this is a glitch, automated filter, or misunderstanding, let's fix it. If it's intentional, how is that rational? (BTW I don't appreciate the lack of communication about what the issue is – my comment just disappeared without explanation, and clicking the "log in with Twitter" button results in the message "There was an error submitting the form.") Here is a copy of what I wrote, for reference: http://curi.us/2161-paths-f.... Agree or disagree, it's an on-topic, good-faith effort to contribute to the discussion.
In [Can Foundational Physics Be Saved?](http://www.overcomingbias.c..., Robin Hanson proposes prediction markets to evaluate the future impact and value of scientists and research. This is intended to help address cognitive biases and incentive problems (like overhyping the value of one’s own research, and seeking short-term popularity with peers, to get funding and jobs).
Markets strike me as too much of a popularity contest where outlier ideas will have low prices. I don’t think letting people bet on things will do a good job of figuring out which are the few positive outliers out of the many mostly-bad outliers. Designing good, objective ways to resolve the bets and pay out the winners will also be very difficult. And historians are often mistaken (in many ways, even more so than the news, which often gets the facts wrong about what happened yesterday), so judging by what future historians will think of today’s scientists is not ideal and can differ from what’s actually true.
So, in line with [Paths Forward](http://fallibleideas.com/pa... (including the additional information linked at the bottom like [Using Intellectual Processes to Combat Bias](https://rationalessays.com/..., I have a different idea about how to improve science. It is *online discussion forums* and *a culture of answering criticism*.
Scientists and research projects should explain what they are doing and why it makes sense, in writing, and anyone in the public should be able to post criticism. Basic standards for discussion tools are listed in footnote [1].
Most readers are reacting by thinking discussions will be low quality and ineffective. There are many cultural norms, discussed in the linked essays and in [books](http://fallibleideas.com/bo..., which can improve discussion quality and rationality. But that’s not enough. People can read about how to have a truth-seeking discussion and then still fail badly. There already exist many low quality online discussions. Why, then, will the ubiquitous use of discussion venues help science?
Because of the expectation of answering every criticism received.
This often won’t be done. What’s the enforcement? First we’ll consider the vast majority of cases where a researcher or research project doesn’t get much attention. A few forums will get too many posts to answer, and we’ll address that later. But suppose some obscure scientist receives one criticism on his forum and ignores it. Now what?
Today, if I find a mistake by a scientist, I can write a blog post explaining the issue and arguing my point. Then I can tweet it out, share it on popular discussion forums, and hopefully draw some attention to it. What will people say? Many will try to debate. They will agree or disagree with my criticism. The Paths Forward approach will transform this situation into a different situation:
I write my criticism and post or link it at the discussion forum for the scientist or research project. They ignore me. Then I say to people: “I wrote X criticism *and* the relevant scientists did not respond.” And no one then debates with me whether my criticism is correct or not. That doesn’t matter. Everyone can clearly see the scientist has violated truth-seeking norms whether my criticism is correct or not. “He did not answer X argument…” is much more objective and clear-cut than “He is wrong because of X argument…”
Norms of having open discussions, where criticism is expected to be answered, would improve the current situation where hardly any criticism is written or answered, and little discussion takes place. And methodological criticisms – that someone did not respond to a criticism – are much easier to evaluate than scientific criticisms.
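To make that last point concrete: checking whether a criticism received *any* response is mechanical, while checking whether the criticism is *correct* requires scientific judgment. Here is a minimal sketch, assuming a purely hypothetical public log of criticisms and response links (this record format is my illustration, not part of the proposal or of any existing forum software):

```python
# Hypothetical sketch: a public log of criticisms and responses.
# The record format is made up; the point is that "was it answered?"
# is a mechanical check, unlike "is the criticism correct?".
from dataclasses import dataclass
from typing import Optional

@dataclass
class Criticism:
    author: str
    permalink: str                       # link to the criticism itself
    response_link: Optional[str] = None  # link to the answer, if any exists

def unanswered(log):
    """Return criticisms with no response on record (objective to verify)."""
    return [c for c in log if c.response_link is None]

log = [
    Criticism("critic_a", "https://example.org/forum/123"),
    Criticism("critic_b", "https://example.org/forum/456",
              response_link="https://example.org/forum/789"),
]
print(len(unanswered(log)))  # -> 1: one criticism has no answer on record
```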
What if a scientist gives a low-quality answer instead of no answer at all? That gives a critic more to work with. He can write a followup criticism. If he does a good job, it gets progressively harder for a scientist giving a succession of bad answers to avoid saying anything that is both wrong *and* easy for many people to evaluate. It’s hard to keep responding to criticism, including followups, do it badly, and still avoid anything that would noticeably look bad.
And this leads into the other main issue: What if scientists get too many criticisms to address and trying to keep up with them consumes lots of time? This would be an issue for popular scientists, and it could also be an issue when an obscure project gets even just one highly persistent critic. Someone could write dozens of followup criticisms that don’t make much sense. Methods for dealing with these issues are explained in the [Paths Forward](http://fallibleideas.com/pa... articles linked earlier, and I’ll go over some main points:
There’s no need to repeat yourself. The more your response to a criticism addresses general principles, the more you can re-use it in response to future inquiries. If people bring up points repetitively, link existing answers (including answers written by other people, which you are willing to take responsibility for just as if you had written them yourself).
If there is a pattern of error in the criticisms, respond to that pattern itself instead of to each point individually.
If you get a bunch of unique criticisms you’ve never addressed before, you should be happy, even if you suspect the quality isn’t great. You can’t know if they are true without considering what the answers to those criticisms are. It’s a good use of your time to think through new and different criticisms which don’t fall into any pattern you’re already familiar with. That is a thing you can’t have too much of, and which is hard for critics to provide. The world is not full of too many *novel* criticisms. The vast majority of criticisms are boring because they fit into known patterns, like fallacies, and pointing that out and linking to a text addressing the issue is cheap and easy (and if people did that regularly, it would help spread knowledge of those common fallacies and other patterns of intellectual error, to the point that eventually people would stop making those errors so much).
It’s important, with suspected bad ideas, either to address them individually or to address them by connecting them to some general pattern which has already been addressed (sometimes we criticize types or categories of ideas, e.g. there are criticisms of all ad hominem arguments as a group). Ignoring a suspected bad idea with no answer – no ability to actually say what’s bad about it – is irrational and allows for bias and for ignoring important, good criticisms. There is no way to know which criticisms are correct or high quality other than answering them. Circumstantial evidence – like whether the first words of sentences are capitalized, whether the text uses slang, or whether the author has over 10,000 fans – is a bad way to judge ideas. Ideas should be judged by their content, not their source.
If you get tons of attention, you ought to be able to get some of your many admiring fans to help you out by acting as your proxies and answering common criticisms for you (primarily by handing out standard links). You can give an issue personal attention when your proxies don’t know the answer. You can also hire proxies if you’re popular/important enough to have money. Getting lots of interest in your work, and having resources to deal with it, generally happen proportionally. Using proxies to speak for you is fine as long as you take responsibility for what they do – if someone does a bad job, either address the issue yourself or fire him, but don’t just let it continue and then claim to be answering criticism through your proxies.
I’m not attempting to present an exact set of rules or instructions for people to follow. Science is a creative process; it requires flexibility and individuality. These are rough guidelines which would improve the situation, not exact steps to make it perfect. These proposals would increase the quantity of discussion and offer better ways for interested parties to engage. They provide mechanisms for identifying and correcting errors, and they make it clearer and more transparent when truth-seeking norms are violated. I think the same proposal would improve every intellectual field – philosophy, psychology, history, economics – not just science.
[1] Forums should:
- allow *public* access
- have *permalinks* for every comment which are expected to still work in 20 years
- *not moderate* or delete content for being disagreeable (only delete things like doxing, shock porn, and spam bots advertising viagra, not mere flaming, ad hominems, rudeness, or profanity)
- have no restrictive *length limits* (on the order of 100k words, not 10k or 280 characters)
- have no *time limits* after which additional comments are disabled
- allow *links* to external sites
- support nested *blockquotes*.
These simple standards are egregiously violated by currently popular forums like Twitter, Facebook and Reddit. The violations are intentional, not a technical issue.
Note that people don’t need to run their own forums. Each project can add a new sub-forum on a site that hosts many forums. Technologically, creating thousands of mostly-silent forums can be very cheap and easy. And there can easily be tools to monitor many forums at once and be notified of new posts. This technology pretty much already exists.
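On that last point, here is a minimal sketch of such a monitoring tool, assuming each forum exposes a standard RSS/Atom feed (the feed URLs below are hypothetical placeholders, and this is an illustration of feasibility, not part of the proposal itself):

```python
# Minimal sketch of a multi-forum watcher. Assumes each forum exposes an
# RSS/Atom feed; the URLs below are hypothetical placeholders.
import time
import feedparser  # pip install feedparser

FEEDS = [
    "https://example.org/forums/project-a/feed",  # hypothetical
    "https://example.org/forums/project-b/feed",  # hypothetical
]

seen = set()  # ids of posts we have already reported

def check_feeds():
    """Print any posts that have appeared since the last check."""
    for url in FEEDS:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            uid = entry.get("id") or entry.get("link")
            if uid and uid not in seen:
                seen.add(uid)
                print(f"New post in {url}: {entry.get('title', '(untitled)')}")

if __name__ == "__main__":
    while True:
        check_feeds()
        time.sleep(15 * 60)  # poll every 15 minutes
```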
Dr. Hossenfelder: "It was one year after Lee Smolin and Peter Woit published books that were both highly critical of string theory, which has long been one of the major research-bubbles in my discipline. At the time, I was optimistic – or maybe just naïve – and thought that change was on the way. ... It’s not how I thought about it, but I made a bet. The LHC predictions failed. I won. Hurray. Alas, the only thing I won is the right to go around and grumble “I told you so.” What little money I earn now from selling books will not make up for decades of employment I could have gotten playing academia-games by the rules..."
Umm - but what did Sabine Hossenfelder actually do during that time? She jumped onto the string-theory bandwagon hype and wrote one publication about extra dimensions after another (1, 2, 3, 4, 5, 6, 7, 8, ...). Maybe she really suffered during it, maybe not, I dunno - but she still wrote them... ;-)
Of course these "ugly" speculations turned out to be the same nonsense as the "beautiful" string theory itself, and her book is just an opportunistic cash-cow project: everyone is wise after the event. Is that really the most effective way to combat groupthink in science?
One thing I wish would happen: create a nice, aesthetic, smooth way for a smart student to go from the two-slit experiment to QED/QCD [in 2-3 years], while having clarity about what the input parameters of the model are (= needs to be measured), what is predicted by the model (= this comes out and is measurable), where an approximation is being made, what the current spacetime model is assumed to be, etc. Also, a mathematician should be able to look at this "journey" and confirm that it's not too handwavy.
Given that there's "not much to do", it's weird that there isn't a 1% group somewhere out there who thinks this is worthwhile.
A good example (but probably too mathy) is the work of Tamas Matolcsi: `Spacetime without reference frames` and `Ordinary thermodynamics`.
https://www.amazon.com/s/re...
Can you give links to longer versions of your points, for us non-physics people? Thx.
Fixed; thanks.
Actually, the idea of prediction markets is quite old :-) Immanuel Kant on Supersymmetry: arxiv.org/abs/1003.2967
My comments are here.
possible typos:
- "not quite as an autonomous a force" -> "not quite as autonomous a force"
- "be employees of first that trade" -> "be employees of firms that trade"
- "trades that they info the collect induces" -> "trades that the info they collect induces"
400,000 theorist-years and some 3 million published pages remain empirically sterile. The neutron-star merger GW170817 killed slow-light theories. They were resurrected with "corrected" parameters, and the LIGO signals were attacked as being noise. That "noise" predicted exactly where the optical signal would appear – and it did.
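For context on why GW170817 constrains propagation speeds so tightly: a back-of-the-envelope check, using the commonly cited figures of a roughly 1.7 s gap between the gravitational-wave and gamma-ray signals over a distance of about 40 Mpc, bounds any fractional speed difference at roughly one part in 10^15:

```python
# Back-of-the-envelope bound on |v_gw - c| / c from GW170817,
# using the commonly cited figures: ~1.7 s delay between the
# gravitational-wave and gamma-ray signals, over ~40 Mpc.
MPC_IN_M = 3.086e22          # metres per megaparsec
C = 2.998e8                  # speed of light in m/s

distance_m = 40 * MPC_IN_M
travel_time_s = distance_m / C   # ~4e15 s, i.e. ~130 million years in transit
delay_s = 1.7                    # observed arrival-time gap

fractional_speed_difference = delay_s / travel_time_s
print(f"|v_gw - c|/c <~ {fractional_speed_difference:.1e}")  # ~4e-16
```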
One undergrad-day in a microwave rotational spectrometer falsifies non-classical gravitations, dark matter, and SUSY. First, we live in a net-matter universe. Second, the Sakharov criteria allow for the necessary conservation violations. Then, use a crafted molecule to falsify beautiful exact vacuum-symmetry founding postulates, DOI:10.1002/anie.201704221. Chemists are doers, not whiners.
Your note about how your book on academia became focused on physics reminds me of the suggestion that Darwin's *On the Origin of Species* be rewritten to focus on pigeons:
http://www.overcomingbias.c...
I don't see them as competing either. Indeed, I think they fit well because one can think of your financial optimization scheme as another measure.
I hadn't seen any of those 3 links, and your book did list some concrete proposals, so I incorrectly assumed that if you had more proposals then you'd mention them in your book. I'm happy to support your proposed research project, even if I'm not quite as hopeful about it as you. I don't see our two proposals as competing, since both could be adopted. My guess/hope is that within the system you propose users would find the market prices that I propose to be especially useful indicators.