Megan Has A Point

Reacting to a new study on academia’s left-leanings, Megan McArdle explains how bias could result: 

Unless they are really, really brilliant, academics, like everyone else, need personal connections to help them up the academic ladder, from recommendations to mentors to advisors. Those personal connections are always much easier to make with people you agree with. Nor would I discount the possibility that, just as women’s work can be subtly dismissed because we know women aren’t as bright as men, academics who think that conservatives are stupid would factor that into their assessment of someone’s intelligence–and then factor that assessment into their assessment of someone’s work. And of course, one’s ideas are to some extent socially constructed; simply by virtue of the arguments and information we hear, even if there is no social pressure to conform, being surrounded by a political culture will tend to drag our ideas in their direction.


And the idea that academia exerts no pressures to conform is spectacularly hilarious to anyone who’s ever spent any time at all around academics. Perhaps the funniest sight I have ever witnessed is the spectacle of a sociologist cruising straight past the analyses of power relationships and group norms that they apply to every single other facet of human existence, and insisting that the underrepresentation of conservatives in academia could only be explained by the fact that conservatives are a bunch of money-grubbing intellectual lightweights who can’t stand rigorous examinations of their ideas, and moreover are too intolerant to fit into the academic community.

The sociologist, you see, is inside academia, and so able to analyze it better than outsiders. Also, the sociologist knows that neither they, nor any of their friends, is biased, so the answer must be that there’s something wrong with conservatives.

It’s odd, given this lack of bias, that one repeatedly hears from untenured academics who are in the closet. "Passing" is not usually a behavior one finds in a community where there is no prejudice.

Ironically, the new study itself illustrates this point nicely. It starts with a long rant complaining that this subject has had too many sloppy studies by ideologically motivated conservatives, such as my colleague Dan Klein, so thank God they can finally offer us an objective analysis. And then they completely confirm Klein’s results. Noteworthy details from the study include that support is nearly equal for Israel and Palestine, and that only one quarter think discrimination is the reason for fewer women scientists.

  • William Newman

    It seems awfully suggestive to me that the most overwhelmingly leftist fields tend to be those where excellence and correctness as assessed by insiders most resists cross-checking by outsiders. In fields close to the natural sciences, engineering, mathematics, or economics, ideas tend to get tested hard and it tends to be easy for excellent practitioners to demonstrate excellence. The Phillips curve was easier to check than Margaret Mead’s sociology, Lysenkoism was easier to check than Freudianism, and once you get into the physical sciences and engineering the comparison hardly seems worth discussing. And in such fields there are many opportunities for practitioners to be tested against the real world, ranging from famous ones like LTCM (see Google if you don’t remember) to obscure ones like the Academy of Model Aeronautics (see the story in the comments of http://jetcityjournal.typepad.com/my_weblog/2007/10/burt-rutan-spea.html). If the overrepresentation of leftists were really because nonleftists are too closedminded or stupid to do good work, or too greedy to deign to work for a professor’s salary, then I’d really like to know why the pattern should be relatively weak in fields where there are many independent cross-checks on excellence, while so strong in fields where it is nearly unknowable (except through deviousness like the Sokal hoax, anyway) whether insiders have abandoned excellence in favor of inbred backscratching politics.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    William, you make an interesting suggestion, but is there a way to more objectively measure how much a field “resists cross-checking by outsiders”?

  • William Newman

    Robin, I think it resists quantification, but that should be somewhat unsurprising, because it seems to contain the problem of falsifiability, which is also hard to quantify.

    Zero falsifiability is pretty easy. I can say Intelligent Design (or the kind of self-identified Marxism discussed in the survey — from memory, not calling for a particular political outcome but a preferred way of analyzing social situations) is unfalsifiable if its practitioners can’t come up with a single hypothetical scenario in which the weight of evidence would convince them it was mistaken.

    But nonzero falsifiability is surprisingly tricky — Lakatos had some good points, illustrating them with Newton’s laws, where weird swerves in trajectories could be explained away by postulating interactions with other bodies which just haven’t been found yet, leaving Newton’s laws unchanged. I think the problem is soluble in principle. I even have a handwaving candidate solution (probably reinventing someone else’s ideas, but I don’t know whose): “take all your observational data and compress it (to find its Kolmogorov complexity). Now, take a library which expresses Newton’s laws as given, and using that library, compress your observational data again. The second compression will be more effective.” But even if my handwaving solution is theoretically valid, few would say it is un-tricky. (And my philosophy of science TA didn’t like it at all. :-) At the moment I don’t even have a candidate solution for rigorously defining “cross-checkability by outsiders.”
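
    A minimal sketch of the compression comparison described above, for concreteness, with off-the-shelf zlib compression standing in (very crudely) for Kolmogorov complexity, which is uncomputable. The falling-body data, the noise level, and the quantization are assumptions made up purely for illustration:

import random
import zlib

random.seed(0)

# "Observations": positions of a falling object at t = 0..99, with a little
# Gaussian measurement noise (all invented for this illustration).
g = 9.8
observations = [0.5 * g * t**2 + random.gauss(0, 0.1) for t in range(100)]

def compressed_size(values):
    """Quantize to three decimals and return the zlib-compressed byte count."""
    text = ",".join(f"{v:.3f}" for v in values)
    return len(zlib.compress(text.encode()))

# Description length without the theory: compress the raw data directly.
raw_cost = compressed_size(observations)

# Description length with the theory: pay a few bytes to state the law, then
# compress only the residuals, which the law makes small and regular.
residuals = [obs - 0.5 * g * t**2 for t, obs in enumerate(observations)]
theory_cost = len(b"x(t) = 0.5*g*t^2") + compressed_size(residuals)

print(f"raw data alone:     {raw_cost} bytes")
print(f"theory + residuals: {theory_cost} bytes")
# If the theory captures real structure in the data, the second total should
# typically come out smaller.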

  • Ingo

    I have a hard time taking this or other generalizations seriously, regardless of which side they lean toward, and that is because of fuzzy terminology and the questionable use of quantitative methodology. What, exactly, is “liberal”, “left-leaning” or “conservative”? It’s bad enough that politics is based on such oversimplifications.

    Now, the study itself tries to drill into specific topics, but I’m hard pressed to see the investigative value of propositions like ‘Business corporations make too much profit’. My answer would be: “Some do, some don’t, and my judgement really depends on the product they make and the value it provides”. Now, that’s a qualitative answer, but even quantitative studies can allow for such distinctions.

    I also have trouble with the assignment of these statements to positions, and while I have no proof readily available, I would bet that positions on statements like the above have varied considerably across the various “camps”, if you will, over time and with age group. My 30-something friends would rate my above statement as moderate-to-leftish, while it’s also my grandfather’s position, and he is definitely conservative by their standards. So what gives?

  • Neel Krishnaswami

    William, your handwaving solution doesn’t work, but it doesn’t work in an interesting way.

    In order to make your procedure reliable, what you need to do is use Solomonoff universal induction. The idea is that you consider inputs to a universal Turing machine accepting prefix-free programs as your space of theories. Then, you assign each theory T a prior probability equal to 2^{-K(T)}, where K(T) is its Kolmogorov complexity. (This is why the restriction to prefix-free is important — it makes the probabilities normalize correctly.) Now, as you get observations, you update your prior in the standard way using Bayes’s law. Then, you can prove that if there is a computable description of your observations, your beliefs will converge to it. Furthermore, this satisfies the principle of multiplicity (it doesn’t reject possible theories), Occam’s razor (it believes simple theories more), and Bayesian update.
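
    For concreteness, here is a toy, finite sketch of that update rule. Real Solomonoff induction ranges over all prefix-free programs for a universal machine and is uncomputable (as noted below); the three hand-picked predictors and their made-up description lengths here are assumptions chosen only to show the 2^{-K} prior plus Bayes-update mechanics:

# Each "theory": (name, description length in bits, deterministic predictor t -> bit).
theories = [
    ("all zeros",    8, lambda t: 0),
    ("alternating", 10, lambda t: t % 2),
    ("all ones",     8, lambda t: 1),
]

# Prior weight proportional to 2^{-K}, with K approximated by description length.
weights = {name: 2.0 ** -k for name, k, _ in theories}

observations = [0, 1, 0, 1, 0, 1]  # data actually produced by "alternating"

for t, bit in enumerate(observations):
    for name, _, predict in theories:
        # For a deterministic theory, the likelihood of the observed bit is 1 if
        # it predicted that bit and 0 otherwise, so a single miss zeroes it out.
        if predict(t) != bit:
            weights[name] = 0.0
    total = sum(weights.values())
    posterior = {name: round(w / total, 3) for name, w in weights.items()}
    print(f"after bit {t}: {posterior}")

# The posterior mass concentrates on "alternating" after the second bit,
# illustrating in miniature the convergence claim above.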

    Unfortunately, it’s also wildly noncomputable. Not only do you need to calculate the Kolmogorov complexity, you need to do it for infinitely many programs, and then update a prior that requires far too much computation and information to feasibly store.

    Basically, it’s one of the coolest results in algorithmic information theory, but has no relevance whatsoever to how science is or ought to be done. Solomonoff induction approximates model selection with a device (Kolmogorov complexity) too crude to be of any real use as a guide to practice. To forestall an objection: you can’t claim that minimum description length is an approximation to the K-complexity. That’s because changing your description language (your underlying universal Turing machine) can only change the Kolmogorov complexity of a program by a constant, and this won’t be true for actually computable compression schemes.

  • Alan Gunn

    Megan’s explanation makes sense, though I’d change one detail. I don’t think leftists think right-wingers are stupid; I think they think we’re bad people. So if a right-wing candidate who is obviously brilliant appears, it still seems OK for a leftist to reject that candidate. Those on the right, by contrast, tend to think that leftists are dumb, so an obviously bright leftist doesn’t really attract opposition from right-wing incumbents. Over time, these attitudes lead to a left-leaning academy (especially if, as in the humanities, there are no immediate adverse practical consequences if you hire the third-best prospect in your field).

    Does anybody have a good explanation for what causes people to favor extreme political views (in both directions)? It seems plain enough to me that there are lots of smart, decent people in both camps. Yet partisan battles take weird and sometimes really dumb forms. In Indiana a few years ago, we had a bitter debate, decided by a nearly party-line vote in the legislature, over whether to adopt daylight time. How can that be anything but nuts? If you didn’t know, could you even guess which party favored daylight time and which opposed it?

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    I’m guessing the Republicans opposed it.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Ingo, stats on whom academics vote for do not seem very fuzzy at all.

    Alan, standard human coalition formation often punishes people who seem to be trying to belong to all the coalitions, instead of choosing sides.

  • http://profile.typekey.com/jdannan/ James Annan

    I’m sure it was just a typo on your part, but the question about gender discrimination was actually asking about the *main* reason for fewer females, not “the reason”. Anyway, it’s hardly a surprise that not that many professors were prepared to admit that their discrimination was the biggest factor – perhaps you can think of a *bias* that would influence their answers.

  • Alan Gunn

    EY wrote (about daylight time):

    “I’m guessing the Republicans opposed it.”

    No, it was the other way around. This even became an issue, of a sort, in our last congressional election, even though it’s a state matter. The Democrat challenging a Republican incumbent made speeches in which he criticized the incumbent for not phoning our governor and telling him daylight time was a bad thing. I can understand people disagreeing about whether daylight time is a good idea. But the notion that the disagreement should follow party lines seems odd. In Indiana, for the most part, there seems to be no real ideological difference between the parties, which might as well be called “Owls” and “Panthers” as “Republicans” and “Democrats.” But they nevertheless seem to inspire strong loyalties, which then manifest themselves in ardent attachments to minor issues.

  • http://www.mccaughan.org.uk/g/ g

    I read the first part of the new study and skimmed the rest, and I have to say that Robin’s statement “It starts with a long rant complaining that this subject has had too many sloppy studies by ideologically motivated conservatives, such as my colleague Dan Klein, so thank God they can finally offer us an objective analysis.” appears to me to be completely false. The actual pattern is:

    Two pages of historical description, with no ranting and nothing about ideologically motivated conservatives.

    *One paragraph* saying that in the last decade there’s been a lot of conservative activism on the theme of a “liberal hegemony” in higher education, and claiming that much recent research on academics’ political views has been “beholden to this agenda”. No mention of Klein in this paragraph.

    One paragraph saying “we’re trying to do better”. This paragraph does, in passing, mention Klein; it says that one of his studies characterized US academics as “nearly uniformly” extremely liberal. It doesn’t claim that Klein said this because of a political agenda, and if Klein actually did say that then this study doesn’t “completely confirm” it.

    Several more pages of historical discussion. No complaints about right-wing ideologues or anything of the sort until page 12, where finally they mention Klein again, not to dismiss his work as ideology-driven (though I agree they suggest that) but to mention a number of concrete things that they allege are problems with that work — which they say mean it’s hard to have confidence in it, not that it’s necessarily wrong, so it’s no big surprise if their final results are similar to Klein’s.

    They characterize some criticism of work like Klein’s from the left as being ideology-driven and sloppy, too.

    Anyway: finding possible explanations for the difference in political views between US academia and US society at large is easy. Very easy. What would be much more interesting is presenting some actual evidence that makes it possible to choose between those explanations.

  • stuart

    “I don’t think leftists think right-wingers are stupid”

    I heard John Ralston Saul talking about his book “The Collapse of Globalism: And the Reinvention of the World”. Someone asked him at the end if he thought free-market economists were bad people, since they obviously were very smart. He responded that he didn’t think they were smart at all, Milton Friedman included.

    Just an anecdote.

  • William Newman

    Neel, thank you for the explanation and accepted technical terms I can use as pointers into the literature.

    Actually, I think my approach has more limitations than that, and in the context of trying to extend it to what exactly it means to say that math or chemistry is more checkable than lit crit, some of the other limitations might be comparably important. For example, at the level of either your analysis or mine, Newton’s Laws and Einstein’s General Relativity seem to be essentially equivalent: both theories are very precise and falsifiable. But to a human in 1918 or 2007, they are very different in falsifiability. It takes only a weekend and a very modest experimental budget to give Newton’s laws a workout, while getting data compressible with GR calls for things like very accurate measurements of the precession of the orbit of Mercury, slowdown of neutron star orbits, or timing of orbiting GPS clocks. Trying to quantify differences like that seems difficult. And without some way to quantify differences like that, a quantitative comparison of falsifiability of different fields would tend to break down on things like grand unified field theories and string theory, and on some of the medical-effectiveness questions that Robin has written about elsewhere.

    But I think even though the fundamental precision of the concept is less than one might like, it’s still reasonable to say that math and engineering are more cross-checkable than literary criticism, much as it’s reasonable to say a salmon is faster than a jellyfish even if “faster than” is defined imprecisely enough that one can’t necessarily say whether a salmon is faster than a rabbit.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    g, I’m sorry you think I’ve misrepresented the article; can anyone else read that paper intro and adjudicate between our readings?

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    EY wrote (about daylight time):

    “I’m guessing the Republicans opposed it.”

    No, it was the other way around.

    The guess was based on the distant off-chance that allegedly scientific studies were remotely correct about Republicans being in any sense cognitively conservative. Guess not.

  • http://www.mccaughan.org.uk/g/ g

    Eliezer, it looks like you’re inferring too much. Why is it impossible, or even implausible, that (1) Republicans are on average more conservative cognitively but (2) “tribal” loyalties can swamp #1 and (3) in this case it happened that the original proponent of change happened to be a Republican?

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    I don’t know Klein’s results, so I don’t know whether this study confirmed them or not.

    Gross and Simmons are obviously, blatantly working from a Democratic political bias. For example, they dismiss by argument from flawed motives the studies done by Republicans – which, when dealing with experimental reports, is ad hominem – but when the studies are done by Democrats, Gross and Simmons don’t dismiss the studies because of their political agendas, but actually speak of the agendas approvingly! That’s not just bias, it’s lack of self-awareness.

    I would say that g read what the authors wanted readers to see, and Hanson automatically read through to the authors’ intentions – it seems to me correctly so, but it is still an inference. Remember also that what would be an extremely mild opinion piece in a newspaper may well be a “rant” by the standards of scientific journals.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    g, it’s not impossible, or even implausible, but I made a trial of my guess and it came out negative. I try to notice when I am confused. In a politically charged case like this, trying to guess without already knowing the answer counts as a more reliable trial of a hypothesis than any amount of post facto rationalization. Of course it is only one piece of evidence.

  • William Newman

    I skimmed the paper.

    I don’t quite agree with Robin’s description of the introduction as a long rant, but I don’t think his description is much of a stretch. I would say the introduction is a long embarrassment. The authors start by praising a study of academic politics apparently commissioned to fight McCarthyism (though as they say, in the event “_The Academic Mind_ was published too late to be any help in the fight against McCarthy”). They go on to complain that “in the 1990s […] an unfortunate tendency became evident: increasingly, those social scientists who turned their attention to professors and their politics, and employed the tools of survey research, had as their goal simply to highlight the liberalism of the professoriate in order to provide support for conservatives urging the reform of American colleges and universities.” I’m sorry, nasty and screwed up though McCarthyism was, when you start by praising a study commissioned as political ammunition against McCarthyism, you are in a very poor position to disapprove of other studies for being “beholden to this agenda” just because “this agenda” happens to be one of which you disapprove.

    I’m not trying to excuse shoddy work. It’s common for shoddy work to be caught by people who have personal or political reasons to oppose it, and that’s fine with me. But instead of calling out the agenda-beholden allegedly low-quality research for its specific flaws, the authors fade into a near-rant here (around pp. 1-2). “A few sociologists continued to produce high quality work on the topic” and their papers are cited and praised without even a hint as to what was good about them except the appropriateness of their agenda. Then the opposed work of allegedly lower quality isn’t even named, much less cited and specifically criticized. And the authors go on to write “with this essay we take a step toward moving the study of professorial politics back into the domain of mainstream sociological inquiry.” They expect people to read past the internal evidence in the introduction without becoming skeptical about whether being in the domain of mainstream sociological inquiry is a good thing?

    On an unrelated note, I noticed patterns in their detailed numbers which don’t match my claimed general pattern at all. E.g., on p. 34, 13:31 Democrats to Republicans in electrical engineering vs. 28:6 in mechanical engineering? That’s a pretty impressive difference in ratios for a pretty similar pair of fields. But not only does it not match my pattern, it doesn’t seem to follow from anyone’s explanation about what’s going on. It seems quite weird, actually, unless it is just something that doesn’t need explaining, such as sampling noise in very small sample buckets.

  • http://www.mccaughan.org.uk/g/ g

    Eliezer (re Republicans and cognitive conservatism): of course I agree that you’ve just acquired a little evidence against the idea that Republicans are cognitively more conservative; I just don’t see how it can possibly be enough evidence to justify saying “Guess not.” about that idea being “remotely correct”.

    Eliezer and William (re different politics of US academics and US population), I find your descriptions of this study every bit as tendentious as anything in it. A few specifics:

    1. [William] “instead of calling out the agenda-beholden allegedly low-quality research for its specific flaws …”: But they do talk about specific flaws of at least some of the research they see as agenda-driven, later on. (Pages 12-15, about Klein’s work.) Do you think it’s impermissible for an author to say “such-and-such a body of work had such-and-such a general weakness” if they don’t then analyse all of that work?

    2. [William] “and their papers are cited and praised without even a hint about what was good about them except the appropriateness of their agenda”: There isn’t the least suggestion that what was good about these papers was the appropriateness of their agenda. (It seems to me that there is a big difference between saying that an inappropriate agenda is a problem, and saying that an appropriate agenda makes work good.)

    3. [Eliezer] “they dismiss by argument from flawed motives the studies done by Republicans”: where? It seems to me that they engage with what they consider to be the best such studies, and make specific complaints about what they think is wrong with them (example: Klein, as already mentioned, though Klein isn’t exactly a Republican), which is as much as they do for any of the studies done by liberals. (With one exception: they have a lot to say about Ladd and Lipset, mostly positive. But they mention a number of problems with that too.) They say, in so many words, more than once, that the ideological motivation and methodological problems they purport to find in others’ work is *not* grounds for dismissing that work. (Example: transition from p3 to p4.)

    4. [Eliezer] “… but when the studies are done by Democrats, Gross and Simmons don’t dismiss the studies because of their political agendas, but actually speak of the agendas approvingly!” That doesn’t seem to me to be an accurate description of, say, their discussion of the AFT report (pages 18-19), or the work of Zipp and Fenwick, of which they say “More problematic, from our point of view, is that the Zipp and Fenwick article — much like the recent studies from the other side of the aisle that it aims to counter — is more concerned to make a political point than to fully and impartially address the distribution of political views”, etc., etc.

    Perhaps I am merely showing my own bias (or, as Eliezer suggests, naivety) here, but it seems to me that Robin, Eliezer and William may be adopting a double standard. If Gross and Simmons either criticize a study done by conservatives, or pass over it quickly, they are “dismissing” it; if they do the same to a study done by liberals, they are “approving” it.

  • Floccina

    Wouldn’t one assume that being a government employee, as most college professors are (even those who are not get government funding), would tend to make one more favorable toward government action?

  • http://entitledtoanopinion.wordpress.com/ TGGP

    I agreed with Robin’s reading, but I read his description before I read the pdf, which could have influenced my thinking.

    Also, didn’t McCarthy just go after people in government, like that bastion of pinko liberals known as the U.S. Army? It was supposed to be people in the State Department giving secrets to the Russkies; college professors wouldn’t have access to information that classified.

    For a funny example of inconsistency involving McCarthy, read about Murray Rothbard’s take on him. At first he praises him, not because of any real political affinity, but for the populist nature of his attack. Then he bemoans the effect McCarthy had on the Republican Party of bringing in ethnic Catholic anti-communists with little interest in the political tradition of liberalism, i.e., populists.

  • http://www.mccaughan.org.uk/g/ g

    One might assume that. Or one might assume that being a professor goes with being more open-minded and that that conflicts with being right-wing. Or one might assume that belonging to an institution with a pre-existing liberal bias pushes you towards liberalism. Or one might assume that professors are cleverer than average and therefore aren’t taken in by the stupidity of right-wing views. Or one might assume that professors are people who can’t face the real world and therefore retreat to the comfortable illusions of left-wing views. Or one might assume that since the US is something of a right-wing outlier in the “West” generally, professors tend to have a more cosmopolitan outlook and hence look like leftists from a US perspective. Or one might assume dozens of other things.

    In the absence of some actual *evidence* as to why US academia leans somewhat in a left/progressive/liberal direction relative to the rest of the US population, I don’t see that anything whatever is gained by offering just-so stories about why it might be. Especially as it seems that everyone offering them prefers stories that fit one of two templates: (1) academics are good/clever/imaginative/etc., and therefore have Good opinions; (2) academics are stupid/politicized/warped-by-their-circumstances/etc., and therefore have Bad opinions.

  • http://www.mccaughan.org.uk/g/ g

    (Er, that was of course a response to Floccina, not to TGGP. Incidentally, those wanting a good account of substantially the explanation she’s offering could do worse than reading Robert Nozick’s “Why do intellectuals oppose capitalism?”.)

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Regarding the correlation between leftists and academics, it is possible to be very agnostic or to embrace explanations on relatively slim clues. This range of caution is possible for many similar correlations as well, such as between gender and wages, gender and CEOs, race and politicians, and so on. What would be a bias is to leap quickly to a conclusion regarding one correlation, but be very reluctant to draw a conclusion about another correlation, all because you liked the first one and not the second.

  • William Newman

    g, defending against my complaint that the authors attack all the studies in the time period without even identifying them or saying what they’re attacking them for (except for having an agenda which is not anti-McCarthyism), you write “But they do talk about specific flaws of at least some of the research they see as agenda-driven, later on. (Pages 12-15, about Klein’s work.)” I see two problems with that.

    First, it’s an amusing contrast to how, when defending against Robin’s complaint that Klein is attacked, you write about the same paragraph “No mention of Klein in this paragraph.”

    Second, granting for the sake of argument that their criticisms of Klein’s work suffice to justify their claim that his work is low quality, it is still unreasonable to make a sweeping claim that multiple people’s work is of low quality based on analysis of one person’s work.

    To me the introduction still looks like it was written by people who have never had to learn how to construct a sound criticism, or who have been preaching to the choir in a political echo chamber for so long that the skill has atrophied.

    Right-wingers aren’t immune to writing stuff like that, but it’s stuff I’d associate with openly partisan right-wing organizations, not mainstream academia. Imagine a piece on the economics of the minimum wage which approved of an early study produced as political ammunition against the New Deal, then attacked an entire decade of recent studies as low-quality work beholden to the organized labor agenda. If I saw something like that come out of mainstream academia, I’d be very surprised.

    (And if you saw something like that, would you be responding to a complaint analogous to Robin’s by saying “sure, Card and Krueger worked in that decade, but that paragraph didn’t call them out by name”? And would you be responding to a complaint like mine by saying “but they did back up their dismissal of all that work! See, they had specific criticisms of Card and Krueger’s work!”?)

    On a separate point, when Eliezer wrote “when the studies are done by Democrats, Gross and Simmons don’t dismiss the studies because of their political agendas, but actually speak of the agendas approvingly” you responded with “That doesn’t seem to me to be an accurate description of, say, their discussion of the AFT report (pages 18-19), or the work of Zipp and Fenwick.” I believe Eliezer was writing about the introduction, and I believe his “approvingly” was in reference to the early anti-McCarthy work. Read it again: does it sound like they disapprove? (If they disapprove, couldn’t they have used the sentence “too late to be any help in the fight against McCarthy” to say something explicitly disapproving?) And while it is true that in the body of the paper there is criticism of Zipp and Fenwick, that doesn’t stop the introduction from ignoring it and giving the very strong impression that the right, and only the right, is guilty of agenda-driven, low-quality work on the subject.

  • Doug S.

    “Reality has a well-known liberal bias.” – Stephen Colbert

    😉

  • http://www.mccaughan.org.uk/g/ g

    William, please feel free to be amused by the fact that (1) when Robin said that the paper *starts with* a long rant attacking earlier work by people “such as my colleague Dan Klein” I observed that there’s no disapproving mention of Klein until page 12 (and there’s plenty of non-ranting between the start and page 12), and yet (2) I do in fact mention that Klein’s work is discussed with some disapproval at that point. I don’t quite see the joke, but I’m sure it’s an excellent one.

    Their comments about anti-McCarthy-ism don’t seem to me to be particularly evaluative; but assuming arguendo that they’re approving of it, it seems relevant to me that just about everyone now regards McCarthyism as a thoroughly bad thing that should have been immediately recognized as such. And that their actual complaint about the allegedly low-quality allegedly agenda-driven work coming from the right is more about the alleged low quality than about the alleged agenda, and that they think the work that was motivated by anti-McCarthy-ism was in fact of high quality. Of course there’s scope for skepticism about why they think that.

    Anyway. I’m not, perhaps despite appearances, undertaking to defend this new study from all criticisms, and actually I agree that some slant is discernible. I just don’t think Robin’s description of the introduction was at all reasonable.

    Attempting to bring this discussion back to something vaguely in the area of “overcoming bias”… This sort of thing is almost always presented in terms of statements like “Academics tend to be liberal” or “Professors are much more likely to be leftist”. Or more tendentious phrases like “academia’s left-*leanings*”. The assumption here is that the *right* point of reference is midway between the Republican and Democratic parties, or the average opinion of all US voters, or something. I think there is some danger of bias right there, as I think can be seen by considering (1) the results of a similar approach to the question of creationism and (2) what would happen if instead of the US population you took the world population or the population of the “Western” nations or just about any other halfway plausible group.

    Also, it seems like some people expect there to be a single explanation, preferably one that makes their side look good and the other side look bad, and I don’t see any reason for that expectation other than a bias towards single-factor explanations of things. And, indeed, single-factor strongly evaluative explanations. Eliezer’s posts “The scales of Justice, the notebook of Rationality” and “Politics is the mind-killer” are perhaps relevant here.

  • RTS

    The only thing funnier than the sociologist who ignores “group norms” in academic settings is the economist who turns a blind eye to government subsidies to academia.

  • William Newman

    g, you write “The assumption here is that the *right* point of reference is midway between the Republican and Democratic parties, or the average opinion of all US voters, or something.”

    Actually, one needn’t make that assumption in order to worry about bias in academia. Consider that even if one doesn’t believe that the *right* proportion of race or sex or religion or close blood relationship to major financial donors is equal to that of the general population, one can still believe it’s bad (pragmatically unwise, morally wrong especially when funded by taxes ostensibly paying for the public good of education and research, or both) to let such considerations trump the more usual notion of merit in academic hiring and promotion.

    One can also believe that fields dominated by partisans of one faction are overprone to disastrously silly groupthink in that partisan direction. Mainstream academia today seems to suffer from rather impressive left-slanted goofs which become obvious to all in hindsight, like a journal being respectable right up to the Sokal hoax, or like historians giving an award to _Arming America_, or like a large bloc of faculty cheering on prosecutorial misconduct in a politically charged criminal case. Am I just suffering from selection bias when I have trouble thinking of many such impressive goofs by mainstream academia which are politically charged in other directions? I understand the worry that not checking faculty factional affiliation would make a field susceptible to goofs of other factions. But I hold the hope that the more pronounced effect would be to reduce groupthink, and so to reduce the chance of impressive goofs in any direction.