Compared to academics, non-academics have a less idealistic view of academia. For example, compared to academic economists, non-academic economists see less social value and less progress in economics research, more influence of researcher gender on that research, and more journal favoritism toward authors at top schools or with inside connections.
Thanks Barkley.
Are there people doing tests of this sort of thing? Getting papers reviewed under different names (setting them up on different websites, if need be) to check what happens to the paper? Just knowing that such tests were taking place might be enough to get reviewers to do their jobs properly.
Following Carl Shulman, another reason there may be less contradiction than appears is indeed the potential for Boolean fuzziness here. My own view, sitting in an editor's chair, is that a majority of decisions do mostly reflect "scientific value." At the same time, some of these other biases are certainly operating, in varying degrees depending on the journal. I would tend to agree that the high guess on "author recognition" is right, especially when we are talking about authors at the very top of the profession's prestige hierarchy, like Nobel Prize winners.
Regarding good old boy networks and institution bias, I think these are very journal specific. It has long been widely believed that two of the top four econ journals have strong such biases: the Quarterly Journal of Economics, based at Harvard, and the Journal of Political Economy, based at Chicago. OTOH, I have seen at least one article claiming that the papers published in those journals that came out of their respective institutions were actually the better ones in terms of citations, a warning that sometimes good old boy networks just lead the editors' pals to send them their best papers. Not such a simple matter after all.
Stuart,
That is hard to answer precisely. The majority of econ journals do double blind reviewing, while some are single blind. However, it is well known that, for one reason or another, a reviewer is often able to guess or find out the author(s) of a paper. One factor that has increased this is the growing tendency for papers to be posted on websites of one sort or another, so that if a reviewer wants to, a bit of googling will turn up the author(s). This is still not always the case; nevertheless, the trend is probably toward less anonymity in practice, even as the majority of journals continue to do double blind reviewing, thereby technically maintaining anonymity.
BTW, sometimes reviewers think they know the identities of authors when they do not. I remember well, a few years ago, having a paper rejected at one of the top journals and getting back a referee report that denounced my paper as "yet another piece of worthless nonsense by 'so and so'." Amusingly enough, I was not 'so and so,' although I had favorably cited an unpublished working paper by 'so and so' (that scum), which apparently triggered this presumption by the referee.
For the record, when I took over editing JEBO, it was single blind and I changed it to double blind. However, I have had members of my board urging a return to single blind on the grounds that "everyone can figure out who the author(s) are (is), and if the paper is not up on a website, it is probably not worth publishing anyway." Nevertheless, I continue with double blind, for at least the illusion of even-handedness and fairness and all those supposedly good things.
How anonymous (in practice) is the review process in economics?
The respondents were asked whether "most" articles are published solely on the basis of their contribution. They might think that at least 51% of accepted articles are good enough that they would have been accepted even without author recognition, etc., even though the various biases can be determinative in marginal cases, which are a minority of accepted articles.
Less obvious inconsistency than you might think. Could be as low as 13% of the respondents.
One should expect that the 45% saying things are strictly scientific will tend not to be those who are reporting various biases. The maximum bias reported is that of author recognition, at 68%. Since 45% plus 68% exceeds 100% by 13 points, at least 13% must have said that there was an author recognition bias while at the same time saying everything is strictly scientific. By the same reasoning, there is at least 1% saying all is scientific but that there is a school bias, and at least 5% saying there is a good old boy network but also saying all is scientific.
I am sure that the numbers actually contradicting themselves are higher than these lower bounds, but this exercise also suggests they might be a lot lower than appears to be the case at first perusal.
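To spell out the bound (a minimal worked version of the overlap argument, on the assumption that all the percentages describe the same pool of respondents answering every question): for any two events, inclusion-exclusion gives

\[ P(S \cap R) \;\ge\; P(S) + P(R) - 1 \;=\; 0.45 + 0.68 - 1 \;=\; 0.13, \]

where \(S\) is "decisions are strictly scientific" and \(R\) is "author recognition matters." Applying the same formula to the reported school-bias and good old boy percentages yields the 1% and 5% lower bounds above.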