I mean, there’s some of this that’s good and there’s some of it that’s bad. In math and computer science, for example, the very fact that you’re publishing your work on a problem in a journal is supposed to MEAN that you didn’t just have a conversation with your friends about the problem that ended up at the usual inconclusive places, but rather that you used the (often highly demanding and unnatural) methods of formal proof to make definite progress on the problem that everyone else can then rely on. And it’s understandable why we’ve set things up that way. On the other hand, certainly for other fields (philosophy? social sciences?) where the stuff that’s in journals is inherently more speculative or open-ended or can’t be conclusively relied upon *anyway*, I would STRONGLY support encouraging more of the sorts of arguments that the academics in question would make around the dinner table, to appear in the journals as well.
But is it really true that academic concepts and methods allow definite clear progress while ordinary talk is endemically inconclusive?
Sometimes! It depends on the methods: formal proof? physics calculation? Monte Carlo simulation? statistical analysis? radioisotope dating? Each one can provide insight that you’d never get from “ordinary talk,” but can also be used to bamboozle (eg by answering an irrelevant question). It’s natural for academics to focus on the kinds of evidence they can provide that others typically can’t — but the key is clear labeling of what’s what, and which claim each piece of evidence is directed to, and what it would take to overturn any given claim. I get very annoyed, for example, when (as often) I’m told “you don’t get to question the accepted findings of decolonial theory, unless you want those people questioning the accepted findings of physics or computer science” — but then, when examined, the findings in question turn out just to be gussied-up versions of ordinary talk, which can be countered with more ordinary talk, rather than technical arguments requiring technical refutations.
Sure, sometimes. But what is the typical case? It seems to me that, used properly, all valid methods can be used to make clear progress. And that includes the many methods of ordinary conversation.
"...but the, when examined..."
https://plato.stanford.edu/entries/perception-problem/
Watch out for any truth involving variations of the phrase "to be", it's a very tricky concept!!
This means, however, that the only ideas you get in the first place are coming from people who care enough to make "definite progress". Like, I already got what I wanted from my own crackpot nonsense; formalizing it to the point where it is publishable is just a lot of misery for little personal benefit.
Speaking as a self-described physics crackpot - I personally believe you are broadly correct, and the gradual shrinking of the domains of science to the strictly conventional has strangled everything.
I'd describe the trend as the conversation in the sciences dying, and being replaced with a soulless exchange of results few people besides the person publishing really care about. I think maybe a lot of the problem is that people don't realize that there had been a conversation, that a conversation is possible; they see the point of journals as being to convey truth. The idea that journals used to publish thought experiments, I would assume, would read as "They used to have low standards for truthiness", evaluated in modern terms. And to those people I would say: It doesn't need to be results! It doesn't need to be true! The point is the conversation that the journals permitted, allowing exchanges of half-formed ideas, so other people might improve upon them, tell their authors where they were wrong - and maybe take inspiration and see how the idea might be completed. The point wasn't to have a repository of "known good" knowledge; the conversation is how we got known-good knowledge. Goodharting that process has not improved it, it has broken it.
Think about the replication crisis. Now look at the idea of "journals as a repository of known-good knowledge". Known-good knowledge was an outcome of the -conversation-, not simply the medium by which the conversation occurred; it was the final argument made in favor of a conclusion, and it was tested by virtue of the fact that somebody was arguing -against- it. It wasn't known-good until the argument was concluded; mere inclusion in a journal didn't turn it into known-good knowledge, it was the entire process - the conversation, the debate, the argument.
As somebody with a half-formed idea, if I end up being correct - granted I fully expect I am correct, self-aware crackpot that I am - I intend to insist that the physics my crackpottery gives rise to be titled "Crackpot Physics", in part because I think the institutions need a reminder: The conversation matters, too.
There are conferences and conversations for conversations, and also see, e.g., Harley 2014 on roots. Discussion exists, but it is limited to a paradigm's concepts, and for a reason.
That reason fails in view of the replication crisis: Goodharting truthiness doesn't work.
The fact that there are still pitfalls to avoid and that some of them have not been avoided in the past does not mean that abandoning current measures wholesale would be an improvement.
What we have lost: The public conversation.
What we have gained: ???
The current measures aren't working; from a scientific perspective, the replacement of the conversation with the "repository of known-good knowledge" is an experiment that has failed.
We have neither lost the public conversation (it just doesn't happen _in those specific avenues_) nor ended up with non-working measures; you clearly underestimate both the scientific progress being made and the importance (and effectiveness, which is high!) of weeding out falsehoods. Cf. https://www.lesswrong.com/posts/fhojYBGGiYAFcryHZ/scientific-evidence-legal-evidence-rational-evidence. In effect, we gained a way to distinguish which beliefs are already protected from belief in what is merely a probable consequence of our protected beliefs but not a protected belief itself. In particular, you overestimate the importance of the replication crisis, overlooking the fact that it was found, and that even students are now routinely told to be on guard for p-hacking.
The identification of a problem is not equivalent to its solution - that it was found doesn't mean it is over. From a scientific evidence perspective, in order to assert that, you'd have to go through the process of replicating another statistically-significant number of claims, and see how many replicate. Has this been done? Is the replication crisis over? Because I just did a brief foray into this question and it appears the answer is "no".
The fact that it was found *isn't in the credit of the institution you are defending*. You don't get to give institutions credit for failing so hard and for so long, just because at some point somebody happened to notice they were failing. Particularly if the problem is still ongoing, and they are still failing.
So - do you have evidence that the replication crisis is over? Or are you just assuming that, because supposedly responsible people know it exists, the problem is solved?
Two points:
1) At least within mathematics and computer science, academics are definitely willing to accept arguments coming from strange angles - as long as they are valid. However, we have evidence (within mathematics at least) that those arguments are extremely rarely correct, or even in any way insightful. On the other hand, within CS, people outside academia - usually programmers - sometimes do research that then ends up being discovered and "rewritten" in the language that academics understand. One example off the top of my head - automatic differentiation libraries have recently proliferated, and only afterwards have theoretical computer scientists worked out the different ingenious tricks that programmers invented in the implementations (a small sketch of one such trick follows point 2 below). Your hypothesis about the entrenched interests of elite academics does not match my experience in those fields.
2) I would claim that the further a field is removed from a "reality check", that is, some external process of verifying its claims, and the more it relies on proxy measures of correctness, the less willing it will (and should?) be to accept heterodox science. For example, the natural sciences have the advantage of doing controlled experiments, so any method that makes correct predictions would eventually win out.
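To make the automatic differentiation example in point 1 concrete, here is a minimal sketch of one such trick, forward-mode automatic differentiation via dual numbers; the class and test function below are my own illustrative reconstruction, not anything from the comment or from any particular library.

```python
# Forward-mode automatic differentiation with dual numbers (illustrative sketch).
# Every arithmetic operation carries the derivative along mechanically,
# so no symbolic manipulation is needed.
class Dual:
    def __init__(self, value, deriv=0.0):
        self.value = value   # f(x)
        self.deriv = deriv   # f'(x), propagated alongside the value

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule, applied at every multiplication
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)

    __rmul__ = __mul__


def derivative(f, x):
    # seed the input with derivative 1, read the derivative off the output
    return f(Dual(x, 1.0)).deriv


# d/dx of 3x^2 + 2x at x = 5 is 6*5 + 2 = 32
print(derivative(lambda x: 3 * x * x + 2 * x, 5.0))
```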
You might argue that math proofs are the only valid argument form in math. But that pattern just doesn't generalize to other disciplines.
I don't see why reality checks being harder implies that the methods of ordinary talk are less valuable.
Isn't it plausible that STEM disciplines (i.e. the ones with actual reality checks you can point to) have gone so far into the weeds and sub-sub-fields that ordinary conversation simply can't approach the frontier of knowledge + technical measurement capabilities?
Because I certainly think that's true, and it's why you need a few years of literature review / background knowledge plus a few years of mentoring in actual physical experimentation before you can usefully contribute to research. 99.999% of regular folk engaged in ordinary talk simply aren't far out enough on the frontier of knowledge and tech to usefully contribute anything. So ordinary talk from 99.999% of the population is useless.
For the 0.001%, they have conferences and a PI and other researchers and other buddies in the field that they can undertake ordinary talk with in order to explore ideas, but then the thing that actually matters is the proof (mathematical or experimental) that lets you actually verify an idea, and academic journals are about promulgating that proof rather than the ordinary talk.
So ordinary talk has its place among the 0.001%, sure. But there's a reason we don't publish it or share it more broadly, and it's to save people time and to publish only meaningful / impactful results that have been validated via proof or reality checks. A good journal is a filter that's trying to absorb and promulgate the most important and highest-impact results, which literally works by filtering out ordinary talk.
“To justify the further habit of academic articles almost never citing ordinary conversation, we’d need to make the further assumption that the very large practice of ordinary conversation almost never makes substantial contributions to topics of academic interest.”
Is this a very hard assumption for you to make? It seems trivially true to me.
I find most people I speak to have interesting insights that could well inspire academic ideas and arguments, as long as I avoid well-worn conversational paths. Why does the statement seem trivially true to you?
Maybe you speak with very smart people most of the time. In my experience, people making casual conversation out of “serious” topics are completely clueless about what they are talking about. Not in the “they disagree with me” vein, but in the “not even aware of jumps in logic” vein.
I see this fairly often because I am well aware of what economists think and say about economics, the stuff they have solved, the stuff they haven’t, etc., and, because people like to talk economics, what the people have to say. The people can’t wrap their heads around how to properly structure arguments, around basic, universally recognized economic mechanisms, around consensus views on x and y and why these are the consensus views, etc.
It’s not just that the average person doesn’t have the views of the average economist. It’s that the average person isn’t even aware there’s something to actually know. There are plenty of clever heterodox economists one can have productive conversations with; again, it’s not a matter of thinking differently, it’s a matter of thinking rationally and thoroughly.
Sure, most people are not going to have interesting thoughts about a specific topic. But in my experience, speaking with people who have a wide variety of backgrounds, nearly everyone has something interesting to say about topics they know well or have deep experience with. I have heard novel and worth-propagating ideas from nurses that were later published by medical researchers elsewhere, from shopkeepers about logistics, and builders about materials science. I try not to discuss economics with random people.
Most interesting insights require translation anyway. Back here in linguistics, there is, for instance, a distance between hand-wavey "this thing must be marked for that feature and this somehow matters" and the formal notions of relativized minimality and defective intervention, and the dinner-table insight will look like the former.
Your argument is entirely abstract. No examples were given.
I have been active on a discussion site where ordinary people discuss politics, history (and lots of popular culture stuff) from a generational framework. Over 23 years there I have gleaned maybe 6 or 7 useful ideas. That is pretty lean pickings.
Most serious discussion in casual conversation is going to be superficial and not rigorous precisely because it is casual. Rigor takes the fun out of it.
The opening sentence: Yes, true, but why pregnant with implications? The reason is obvious. Academic journal articles aspire to establish new knowledge which is considered sufficiently important to be part of the human knowledge archive. Dinner table conversations, like social media exchanges, do not have that goal. Conversations concern known information, or transient, mostly quotidian situations, such as a personal experience.
Regarding "the entire world of ordinary conversation has no substantial influence on the entire world of academic research." Why would it? Establishing new knowledge is a niche endeavor. In contrast, conversation concerns one's daily social goals of informing, sharing, persuading.
Elsewhere Hanson laments: "They might bond with each other, impress each other, and have fun, but they couldn’t possibly be learning much." People are learning massively during social interaction. They're learning things they didn't already know, but they are not usually creating novel information that no human has previously established.
Hanson's objective appears to be to complain that journal articles specialize: "typical academic disciplines (really sub-disciplines) are organized around a very limited set of acceptable concepts and methods." Experts are gatekeepers, using their journals and academic positions to exclude new ideas and maintain their own status. Yawn. This may be true. Plenty of academics have experienced this. We try to move our field in a new direction, and resistance is fierce.
But is this the solution: "Some academic reviewers could specialize in evaluating the concepts and methods of ordinary conversation, to make those available to paper authors." The concepts of ordinary conversation? Do you mean, like, what one could elicit from a thoughtfully conducted focus group? That's already an established method that academics use to gain new information. But ordinary conversation is about the weather, sports, TV shows, and what are the best restaurants in town. It isn't new knowledge.
Another method that draws on ordinary conversation is qualitative and quantitative analysis of online discussion forums. I've done this; but that is natural human conversation as the object of study.
We already have arenas where human conversation is content: journalism. Journalists talk with people and report what people said. It is a feature, not a bug, that academia requires new knowledge to be established following rules of evidence, not just someone sharing their thoughts.
Ordinary talk is typically not about seeking truth. As you often write, most ordinary human interaction is a form of status game, attempting to gain or associate with prestige.
Isn't the primary innovation of the scientific method the recognition that human minds are very easy to fool, and in fact fool themselves? That it takes tremendous discipline to avoid just believing what you want to believe, as opposed to what is true? In ordinary talk, we have anecdotes; in science we require statistical evidence. In ordinary talk we promote hearsay as gossip; in courts we demand first-person testimony. In ordinary talk, we are convinced by confirmation bias; in medicine we require not just blinded studies, but double-blinded studies.
Journals restrict academic talk to prohibit anecdotes, hearsay, and unblinded studies -- all of which are a common part of ordinary talk. Isn't that appropriate, because we have learned through painful experience that ordinary talk is not a reliable guide to objective truth?
In certain fields, such as philosophy and perhaps social science, I think you have a point. The basic problem is that these fields rely too much on mimicking prestigious style and content, not enough on making specific arguments for or against a position, and not enough on actual empirical testing. The social dynamics might be described as "incestuous," because there is not enough validation/falsification from objective sources.
However, I think your argument would be improved if you could provide more examples of statements that come up in "ordinary conversation" that would be valuable to a research paper, but that are not currently permitted in one. I'm having trouble imagining such statements.
Much of my work consists of finding insights via methods not seen as sophisticated enough to include in academic journals.
Ah, well, I guess I'm not familiar enough with the standards of your field. Are you saying, for example, that your recent post about culture being unstable contains valuable reasoning you couldn't publish because it is heterodox? What if you made the paper about a simplified numerical simulation of culture stability to demonstrate your point, got the same results you claim, and then added the reasoning of your recent post to explain the results? Wouldn't that be publishable?
E.g., by numerical simulation I mean: you could simulate a lot of ant-like agents with cultural rules that adjacent agents can exchange to guide their behavior, and meta-rules that determine which exchanges are allowed with what probability. Then you could see which rules help agent "societies" compete best. This would be a lot harder to do than the reasoning of your post, but would also be a lot more convincing, and you could include the reasoning of your post as a prediction/explanation.
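A minimal sketch of what such a simulation could look like; the agent attributes, payoff function, and all parameters below are my own illustrative assumptions, not a worked-out model of cultural evolution:

```python
# Toy agent-based sketch: agents carry a cultural "rule" (cooperation propensity)
# and a "meta-rule" (openness to adopting another agent's rule); societies whose
# rules produce higher payoffs grow at the expense of the others.
import random

random.seed(0)

N_GROUPS = 20      # number of agent "societies"
GROUP_SIZE = 30    # agents per society at the start
GENERATIONS = 200

def new_agent():
    return {"rule": random.random(), "meta_rule": random.random()}

groups = [[new_agent() for _ in range(GROUP_SIZE)] for _ in range(N_GROUPS)]

def group_payoff(group):
    # assumed payoff: cooperation helps the group, with a small per-member cost
    coop = sum(a["rule"] for a in group)
    return 2.0 * coop - 0.5 * len(group)

for gen in range(GENERATIONS):
    # 1) cultural exchange: randomly paired agents (standing in for grid
    #    neighbours) may copy each other's rule, gated by the receiver's meta-rule
    for group in groups:
        for _ in range(len(group)):
            a, b = random.sample(group, 2)
            if random.random() < a["meta_rule"]:
                a["rule"] = b["rule"]
    # 2) group competition: the highest-payoff society grows by one member
    #    (copied from itself), the lowest-payoff one shrinks by one
    payoffs = [group_payoff(g) for g in groups]
    best, worst = payoffs.index(max(payoffs)), payoffs.index(min(payoffs))
    if len(groups[worst]) > 2:
        groups[worst].pop(random.randrange(len(groups[worst])))
        groups[best].append(dict(random.choice(groups[best])))

largest = max(groups, key=len)
print("final group sizes:", sorted(len(g) for g in groups))
print("mean cooperation rule in largest group:",
      sum(a["rule"] for a in largest) / len(largest))
```

Which rules and meta-rules end up dominating the largest societies is then the output you would compare against the prediction below.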
So your prediction would be that the simulation could show cultural group selection, and that successful groups would grow in size up to a point where their effectiveness declines and the large group collapses. If you can get that to happen with reasonable assumptions that are realistic simplifications of human behavior, then your idea would be supported.
The point is that you'd have to create such a simulation to be publishable, you couldn't just make the point qualitatively as I did.
If you could gather a lot of historical data showing that governments tended to collapse as a function of the size of the population, that might also support the point.
Without something like that, the point you made is fairly weak evidence, more like an untested hypothesis. Maybe it's right but it needs confirmation. I don't think it's a bad thing that journals would refuse to publish arguments below a certain threshold of evidentiary strength.
If journals publish too many arguments that are disconnected from concrete evidence, that's clearly pathological, as the discourse in the field can then depart from concrete evidence and wander randomly based on fashion and the speaker's prestige. We see some of that problem in philosophy and some softer sciences, e.g. with the replication crisis.
Or build a mathematical model thereof, saying that this is the model's prediction. Yes.
Reproducibility, via control or statistical inference, is the primary limitation that drives this divide. Observation is insufficient to reproduce and test a hypothesis. Even registered predictions fail the test.
"The sun will come up tomorrow" has 99.999999% predictive accuracy. We have theories that suggest _one day the sun will not rise_ that have sufficient explanatory power that we believe them despite never having any direct evidence from the sun not rising in the past.
"The vast majority of statements that appear in natural human conversation" are not replicable.
"authors are supposed to cite any sources that substantially influenced their article..." No, that's what school chidren are taught. For scholarly articles, a citation informs the reader as to where a statement was established as valid; the citations backs up the statement.
Let me add an example. Academic journals (most? all?) assume a purely materialist worldview, with no God and no soul. Yet in reality pure materialism is the worldview of only a tiny minority. Even among professors, atheists are a minority. Presumably, there are a lot of important concepts that are left out of academic journals for this reason. Academic articles can talk about religiosity but theological assumptions can’t really form the basis of an argument. As a religious guy myself, I think it would be a lot of fun to relax this constraint and see what happens. It could cause some chaos, at least at first, so should be done in a controlled way. I imagine a heterodox journal would be wild at first, but over time it would develop its own orthodoxy, and a new heterodox cycle would be needed. Fun idea.
I don't know if this is what you have in mind, but I think there could be a lot of value in capturing the informal thoughts of highly trained practitioners.
For every theorem that a mathematician has proved and published, there are probably five more things they have strong hunches about. These are not publishable today; at best they propagate haphazardly over tea at conferences.
Exactly. Because there's a qualitative difference between "I have a strong hunch that P != NP" and a proof to that effect, or even between "I have a strong hunch that there are no languages with accusative flagging and ergative indexing" and "here is a 200-language sample of typologically diverse language groups, and they only show the three other options". And this is why the former propagates at conference discussions and the latter gets published.
Most of the socially transmitted conjectures I have heard are more like your linguistics example (specific, suggesting avenues to investigation) than P versus NP.
I mean, P != NP itself is just such a conjecture; we still don't have a formal proof, but many mathematicians have commented on it one way or another :)
Indeed, I think the demand for fairly rigid, clear methodological rules is one of the key technologies that allow academic progress. As humans we are highly inclined to be tribal, to evaluate claims and criticism primarily based on whether they help or hurt our allies, etc...
The fact that one can point to specific methodological rules as having been violated helps us police what gets into journals and critique publications that have been made with less risk of it all being sucked into tribalism.
The philosophy literature is more like what you suggest, and I think the unique problems created in that literature illustrate the problems such inclusion might create. Consider this paper (linked at bottom) by Sokal criticizing a ridiculously badly argued paper that uses technical-sounding terms to suggest string theory failed because of too many white men.
This didn't happen because most philosophers are idiots or ideologically blinded. I'm sure most philosophers who read the piece weren't persuaded by the bad arguments. The issue is that in this part of philosophy there aren't the same methodological requirements (e.g. forming arguments up in a semi-mathematical style, clearly defining terms, only speaking literally, etc.), and as such there is huge interpretational wiggle room.
I'm sure the supporters of the piece will insist that the appeal to Einstein covariance was just analogical, that the author is really just raising possibilities not really claiming to have demonstrated them etc etc.
The methodological demands help prevent this kind of motte-and-bailey trick -- a particularly dangerous one because we know the same excuses wouldn't be accepted with a different conclusion.
If this piece had been submitted to a more analytic-style journal, it could have been rejected/criticized for methodological failures -- reasons that are less likely to be seen as obviously motivated by support for the other tribe.
https://journalofcontroversialideas.org/article/3/2/260
I don't think many people believe that content unsuitable for academic journals lacks substantial value; rather, the idea is that academic journals should provide an unbiased source of ground truth that can then be used to support and evaluate arguments. I certainly agree there should be some other alternative that isn't an academic journal, but some arena where only clearly defined methods and standards are allowed is important.
Obviously, lots of academic works reach the wrong conclusions, but the idea is that if we know a study used a random sample of this size and reached this result, then the paper is clearly reporting a specific degree of evidence.
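As a toy illustration of what "a specific degree of evidence" means here (the numbers are purely illustrative, my own example): the same observed rate carries very different weight at different sample sizes.

```python
# Rough 95% confidence interval (normal approximation) for an observed proportion,
# showing how the evidential weight of "60% of the sample" depends on sample size.
import math

def ci95_half_width(p_hat, n):
    return 1.96 * math.sqrt(p_hat * (1.0 - p_hat) / n)

for n in (20, 200, 2000):
    w = ci95_half_width(0.6, n)
    print(f"n = {n:4d}: observed 60% -> plausible true rate roughly "
          f"[{0.6 - w:.2f}, {0.6 + w:.2f}]")
```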
Casual conversation is usually vague and uncertain, with no hard standards. If I see one of your blog posts cited in a journal, I have no good way of evaluating what degree of evidence that cite provides, and there is a high danger that academic journals become no better than our newspapers, with ideology driving which papers are accepted far more than it does now.
--
However, I'd note that works which clearly claim to only be spelling out the considerations/logical form are often accepted in philosophy.
Pretty sure you must already be aware of his stuff (and may even know him!), but Adam Mastroianni makes some different, though adjacent, arguments about research done inside vs. outside of academia, e.g.: https://www.experimental-history.com/p/an-invitation-to-a-secret-society
I reckon you guys would have fun exchanging ideas in a conversation!
You can cite ordinary talk, then rewrite it using ChatGPT or similar AI software. Hey presto, it sounds all clever and worthy of inclusion. All you need is for it to be published just once, and you're away. Free-to-use AI software is revolutionary.
Dr. Hanson claims it is not about the form but about the content.
Fair enough