Contrarian Excuses

On average, contrarian views are less accurate than standard views.  Honest contrarians should admit this: neutral outsiders should assign most contrarian views a lower probability than standard views, though perhaps a probability high enough to warrant further investigation.  Honest contrarians who expect reasonable outsiders to give their contrarian view more than the usual credence should point to strong outside indicators that correlate with contrarians tending to be right.

Most contrarians, however, prefer less honest positions, like:

  1. “They Laughed At Galileo Too”  Many contrarians seem content merely to point out that contrarian views have sometimes turned out to be right.  Have they no higher aspirations?
  2. “Standard Experts Are Biased” Yes of course we can identify many biases that plausibly afflict standard experts.  But we can also see at least as many biases that plausibly afflict contrarians.  No fair assuming you are less biased just because you feel that way.
  3. “We’ve More Detail Than Critics” Some contrarians say that only explicitly offered detailed arguments and analysis should count; it shouldn’t matter who agrees or disagrees.  And since advocates usually offer more detail in support of their specific arguments than critics offer in response, they automatically win.  They may not have written up their arguments in a standard or accessible style, or published them in standard places, or even submitted them for publication.  But by their “how much stuff we’ve written/done” standard, they win.
  4. “Few Who Study Us Disagree” Some contrarians accept that who agrees or disagrees matters, but say only those who have reviewed most available detail should count.  Since critics have less patience than advocates for studying advocate detail, advocates win.  If many critics do read and reject them, advocates can just add more detail and then complain critics haven’t read that.  If critics do read more, advocates can complain critics aren’t of the right sort, e.g., not enough math, sociology, or whatever.  There is usually some way to define “valid” critics so they are outnumbered by advocates.

If you want outsiders to believe you, then you don’t get to choose their rationality standard.  The question is what should rational outsiders believe, given the evidence available to them, and their limited attention.  Ask yourself carefully:  if most contrarians are wrong, why should they believe your cause is different?

(Inspired by this recent argument with Eliezer Yudkowsky.)

  • Bill

    If a Contrarian agreed with this argument, would it be false?

    • http://zbooks.blogspot.com Zubon

      Don’t we often think of Robin as a contrarian? So here we have a contrarian taking a contrary stance to the standard contrarian position.

      Meta, ho!

      • http://hanson.gmu.edu Robin Hanson

        I’ll try to live up to the standards I set here: either admit outsiders shouldn’t agree with me, or say what factors I think should persuade them to think my case better than the usual contrarian case. For example, on the future econ of robots, I say I’m just combining standard views of AI folks on the future of robots, with the standard views of economists on how such tech would change society. I’m only contrarian at that point in trying to combine standard views of multiple fields.

      • http://singinst.org/AIRisk.pdf Eliezer Yudkowsky

        So “I’m Just Trying To Apply Existing Standard Knowledge In A Novel Way” is a valid contrarian excuse then? Should we pay much more attention to any contrarian who offers this excuse?

        If someone says “I’m Just Combining The Knowledge Of These Two Fields” to make statements that most people from field A and field B would separately disagree with, should we believe them over any prestigious academics who are only from field A or only from field B?

      • michael vassar

        One reason that outsiders looking at Robin’s and Eliezer’s arguments might believe that Eliezer’s contrarian views are correct, as opposed to Robin’s, is that they might notice that Eliezer is less likely than Robin to ignore the stronger arguments against his positions, like, for instance, the one below.

        “If someone says “I’m Just Combining The Knowledge Of These Two Fields” to make statements that most people from field A and field B would separately disagree with, should we believe them over any prestigious academics who are only from field A or only from field B?”

      • http://hanson.gmu.edu Robin Hanson

        If I say “believe claim C because it is the standard view of field F, which is the closest academic field to claim C”, even that takes some work for a listener: to verify that C is in fact the standard view in field F, and that F is the closest field to C.

        If I say “believe claim C because it is directly implied by claims A and B, where A is the standard view of field G, which is closest to A, and B is the standard view of field H, which is closest to B”, it will clearly take more work for a listener to verify. But if C is directly enough implied by A and B, it shouldn’t take more than three times as much work as in the simple case above. I take a factor of three to be “small” given how much these cost factors can typically vary.

        “Applying existing standards in a novel way” sounds a bit too open ended to be the sort of thing outsiders could easily verify. But the more specific type of application I describe above, yes that seems a pretty reasonable standard: if one in general can trust field F on claims C for which that is the closest field, one can nearly as much trust claims C which are directly implied by claims A and B, each of which is supported by its own closest field.
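
        (A rough way to make that factor of three explicit, on my own bookkeeping rather than Robin’s: assume each verification step costs a listener about the same amount of work $w$. The simple case needs two checks, that C is standard in F and that F is closest to C; the compound case needs five, two each for A and B plus the directness check:

        \[
        \frac{w_{\text{compound}}}{w_{\text{simple}}} \approx \frac{2w + 2w + w}{2w} = 2.5 \le 3 .
        \]

        This is only a sketch; if the implication from A and B is less direct, the last term grows and the bound fails.)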

      • http://yudkowsky.net/ Eliezer Yudkowsky

        FYI, I’ll go ahead and state that “I’m Just Applying Existing Knowledge in a Novel Way” actually ranks right up there with “These People Are Ignoring Their Own Goddamned Carefully Gathered Experimental Evidence” as contrarian statements that actually *do* cause me to sit up and take notice.

        Actually, the third statement is probably the most powerful trope I know of – it applies to the contrarian views of Robyn Dawes on psychiatry and Gary Taubes on dietary science, to name two unrelated-to-my-affairs contrarians who I’m willing to believe against the common wisdom of entire academic fields. Eric Falkenstein has made a similar statement (though I haven’t yet investigated) on financial returns, and of course this is what you say about medicine as well. So if I had to pick a single contrarian “excuse” that makes me sit up and take notice, it would be that one – though I of course cannot claim to come under this heading.

      • http://yudkowsky.net/ Eliezer Yudkowsky

        “third trope” = “they’re ignoring their own experiments”

      • http://hanson.gmu.edu Robin Hanson

        “The standard view in field F is that the simply-interpreted direct-evidence on question Q is answer A, but experts in F tend to favor a different answer B” does indeed raise a red flag. How often I wonder is A the right answer in such situations?

      • http://yudkowsky.net/ Eliezer Yudkowsky

        Also, “I’m Combining Standard Knowledge From Multiple Fields” would probably be the general meta heading title of cryonics (adequate judgment requires standard cryobiology + standard naturalism + standard physics (to extrapolate physical possibility of nanotech)).

        I think that if you don’t start out with the idea that the answer ought to just be “contrarians are always wrong”, and admit and confess that distinguishing the right contrarians is actually something that a lot of people have to do and is an important life skill in a world gone mad – e.g., pity the poor souls who are trying to fight obesity using the standard FDA food pyramid – then a list of good cues for right contrarians would be pretty useful.

        Multidisciplinarity is a strong good cue. So is saying, in a tone of healthy indignation, “Stop ignoring your own experimental evidence!” – I suppose it would be easy to fake, but I can’t think of anyone actually trying to fake it; so far it seems like a remarkable statistical indicator. “Really, I’m just applying standard knowledge” is another thing I hear a lot of smart contrarians say. Bad grammar (in native English speakers) is a remarkably strong, fast, and useful cue to browse elsewhere. People who explicitly talk about rationality are not uniformly sane but they certainly get my attention. And so on.

        But ultimately, I think, a lot of the meta stories behind successful (or very plausible) contrarianism are not super-compelling stories, just “The People Who Disagree With Me Are Making Snap Judgments Based On Things That Are Not Valid Evidence And That’s All There Is To Them”. Not that you believe anyone who says that. You look at their object-level story. You may have to be satisfied with that object-level story. Sometimes there isn’t any more to the meta story than that. And the absolutely clear poster child for this case is, of course, many-worlds.

      • http://yudkowsky.net/ Eliezer Yudkowsky

        Another really important cue is when the standard model leads to poor real-world performance. I haven’t necessarily heard any super-good contrarian models of macroeconomic finance, but I’m open to them because I think that if economists really understood what was going on, our civilization wouldn’t be this screwed up. A similar consideration would apply to general but *not* narrow AI – AGI is screwed up, narrow AI is not. Standard dietary recommendations were accompanied by an explosion in obesity rates, so we should be willing to entertain the hypothesis that standard dietary recommenders do not have a strong grasp on their subject matter. Etc.

      • http://hanson.gmu.edu Robin Hanson

        Yes of course it would be great to find indicators distinguishing more from less accurate contrarians – let us keep trying. But let us also be careful to test proposed indicators against a wide range of contrarian beliefs, not just our personal favorites.

        When the meta clues are not very compelling, we should admit that outsiders without time to study details should not be very compelled – they should be skeptical.

        The “standard experts fail” clue needs to be calibrated by how hard a problem is, which is a hard calibration to do.

      • http://yudkowsky.net/ Eliezer Yudkowsky

        Let us by all means confess that outsiders with no time to study should be skeptical. But if the outsiders were rational, they should agree that this sort of skepticism represents a default prior, which ought to be very unstable upon the presentation of further evidence (one way or another), and which is not further negative evidence of itself.

        In other words, this sort of outsider skepticism is not to be presented as “Why don’t more prestigious academics in your field agree with X?” because it is not that sort of disagreement.

      • http://cob.jmu.edu/rosserjb Barkley Rosser

        An example of the third trope, pushing what Eliezer is saying a bit further, is the models coming out of econophysics. Forget Eric Falkenstein. He is a sideshow. You have a massive failure on the part of standard economics to deal with what has happened in financial markets. You have a new interdisciplinary discipline, econophysics, that puts forward alternative models, most of which remain unable to be published in leading economics journals. Not all of this is going to fly (and it may be too respectable in some circles already to be truly “contrarian”), but it fits some of the stories here, including the matter of pulling stuff from one discipline to apply to another.

      • Hal Finney

        Here are a couple of contrarian arguments I have run across, both relating to health, which have marks of superficial credibility:

        The Scientific Scandal of Antismoking

        Obesity Paradox

        The first makes the case that smoking is not bad for your health; the second (part of a series; follow the links re Obesity Paradox on the lower right sidebar) argues that obesity is good for you. Both have the superficial appearance of referencing scientific studies and claiming that the mainstream misrepresents the results.

        So, what is the burden of proof in deciding whether to start smoking or get fat? Will you be persuaded just by scanning these articles? Does their existence at least cause you to revise your opinion on these health questions?

        Or will you figure that experts must be aware of this evidence and will have taken it into account in making their recommendations, that these authors may be making their own misrepresentations, and that they may not be presenting evidence which argues against their favored view?

      • http://hanson.gmu.edu Robin Hanson

        Hal, those are both disturbing, and persuasive, sources. The second one only addresses heart disease and not mortality more generally. I know that if one controls for both exercise and weight, weight doesn’t seem to matter for mortality.

      • http://timtyler.org/ Tim Tyler

        The main intervention that affects mortality in a positive manner is dietary energy restriction. That’s not about weight – but it is about energy intake.

  • http://singinst.org/AIRisk.pdf Eliezer Yudkowsky

    I’ll go with that classic contrarian demurral:

    “I’m Trying To Make A Stand On The Correctness Of Certain Specific Arguments, Not All This Personal Stuff.”

    • http://hanson.gmu.edu Robin Hanson

      You can’t escape the question: should neutral outsiders believe you, if they can’t review the details of your arguments? This outside question isn’t “personal”, though you might think it “meta.”

      • http://singinst.org/AIRisk.pdf Eliezer Yudkowsky

        If they know I’m some kinda master rationalist guy, then they might invest a couple of days in reviewing the arguments, which ought to be enough to start getting an impression of which direction the winds of evidence are blowing from, and whose court the ball has last been hit into.

      • Brian Jaress

        That was one of the most unsatisfactory rebuttals I have ever seen.

      • http://hanson.gmu.edu Robin Hanson

        So you think outsiders should have easy to evaluate evidence that you are more rational than most contrarians, which should entice more than the usual number of them to consider your detailed arguments?

      • http://singinst.org/AIRisk.pdf Eliezer Yudkowsky

        Sure. That evidence should be pretty easy to evaluate, and it’ll get even easier once I’ve written a popular book.

      • http://hanson.gmu.edu Robin Hanson

        An expert in field F making a contrarian claim in another field G is better than just anyone making a contrarian claim in field G, but isn’t as good as an expert in G making a contrarian claim in G. And since most fields have some experts in them making contrarian claims, even then an outsider’s credence in this claim couldn’t rise much above the usual rate at which contrarian experts in such fields are right.

      • http://singinst.org/AIRisk.pdf Eliezer Yudkowsky

        As long as it gets them to read the object-level arguments, good enough. I never, ever claimed in the first place that I ought to carry the day on meta alone!

      • http://CommonSenseAtheism.com lukeprog

        But for almost everyone, with regard to the arguments MIRI gives about risks from AI, this would require enormous amounts of time and attention and study of new-to-them fields like computation and AI. Isn’t a healthy dose of epistemic learned helplessness appropriate for almost everyone trying to evaluate MIRI’s arguments?

      • Peter David Jones

        Outsiders only need to know two things: that the arguments presented are highly conjunctive, and that there is a wide range of opinion in all the fields involved.

      • VV

        And yet, people who have an actual expertise in the relevant fields, computer scientists and AI researchers, tend to reject your arguments, or at least think they are exaggerated, while you get most support, especially academic support, from philosophers and other non-experts.

      • Nick Tarleton

        You can’t escape the question: should neutral outsiders believe you, if they can’t review the details of your arguments?

        I admit I have a hard time seeing why this is such an interesting question.

  • http://spivonomy.blogspot.com/ Sam Wilson

    If I want outsiders to take me seriously, it might be a good idea to drop the “contrarian” moniker and get on with good work. A well-crafted argument can stand on its own without resorting to cheap debate tactics like ad hominem, or ab hominem, as the case may or may not be. If my position is correct, the truth-seeking nature of people will inevitably show it to be so; the only matter is one of time horizon. The question becomes: will I be shown to be right in my own lifetime? And there we have status-seeking.

    That leaves a fine question: what is a legacy worth? Would I be happier to be ridiculed during my professional career, yet have my name in a textbook a century hence, or to be discredited post mortem, yet enjoy fame and adulation while I live?

    Another interesting question: is there value in the mere status of outsider? In some respects, such a position is quite liberating. One is free from having to make actual policy decisions, for example. There is more room for mental exercise as an outsider, as there are no limitations of convention. Hell, you can be crazy as a bedbug if you want to be, claiming that sentient reptiles from the Planet Thuban control the world’s great religions, eventually culminating in the Great Purge of 2012.

    True, David Icke is demonstrably wrong, but how much fun it must be to be him. If I were a guy like that, you can bet I’d use every underhanded argument in my grab bag I could lay my scaly hands on to hold my status close. Being that I’m not, I suppose I will have to content myself with rigorous analysis based on reliable models and testable hypotheses, all supported by reliable empirical evidence. *sigh*

  • Douglas Knight

    If you want outsiders to believe you… The question is what should rational outsiders believe

    Whoa! That’s a big jump between those two sentences.

    • http://hanson.gmu.edu Robin Hanson

      OK, yes, I was presuming you’d want outsiders to reasonably believe you.

  • michael vassar

    Eliezer’s childhood SATs are enough to warrant a bit of further investigation by those who are good enough at evaluating object level arguments that the cost of some object level evaluation is low. As a result, a number of people who are good at object level evaluation (as verified by standard academic or economic success) investigated them and were convinced enough to keep reading. This led them to become more convinced, convinced enough to create a reasonably successful online community and non-profit. This is a stronger piece of evidence than the SAT scores.

    In my opinion it is a sufficiently strong piece of evidence that people who are not extremely busy, and who have the ability to evaluate the object level arguments of an ordinary physicist, ought at this point to invest the time to examine the arguments on an object level for long enough to overwhelm the prior against contrarian views, unless they have a very exaggerated belief in the strength of that prior. Reasonable outsiders who have not examined the object level arguments should not be convinced yet, or ever really, by almost any contrarian claim.

    Unless a person has an unusual ability to evaluate object level arguments, they really have no rational option other than to accept the consensus; but likewise, unless they have an unusual ability to follow object level arguments, they can’t do even that. Also, these abilities, rare though they may be, are typically assumed on Less Wrong and on OB as part of what we mean by a ‘rationalist’, and it is odd to talk about what it is rational to do or to think in their absence, just as it is odd to discuss what it is rational for a person of very low intelligence to do or to think.

    • http://yudkowsky.net/ Eliezer Yudkowsky

      Childhood high test scores are excellent cues of employability and teachability, but explicitly mentioning them is often a poor cue for sanity when it comes to a contrarian theory.

      • Doug S.

        In my case, they’re a great cue for teachability but not so much for employability; I’m very much unemployable.

      • michael vassar

        It would be a TERRIBLE cue if you said it. It’s a perfectly good cue for a potential employer (hmm… I guess that’s me now) to use when considering whose resume (in the form of blog posts, arguments that can be judged on the object level in your case) to look at further.

    • http://hanson.gmu.edu Robin Hanson

      “Our view has a lot of successful fans” is indeed a better indicator than its opposite, but even so most contrarian views of that description are wrong. It is far from clear we have evidence that LW or OB readers in fact have a substantially superior ability to choose between contrarian vs. standard views.

      • komponisto

        We’re talking about the ability to evaluate arguments on the object level here (right?), so the “contrarian vs. standard” distinction disappears. LW/OB readers probably do on average have a superior ability to choose between competing theories when only looking at arguments.

  • michael vassar

    By the way, I met Taubes and he seemed (almost certainly) sincere but not all that bright. Definitely not very erudite and not all that good at philosophy of science (while, as is typical, assuming he knows all there is to know about it). The meta-level ad-hominem data wasn’t enough to convince me to pay attention to the object level argument, and Seth Roberts taking him seriously is definitely only VERY weak evidence. Taubes may be right, but I think it much more likely that he AND the consensus are very badly wrong than that he’s right. I just lack any contrasting theory of my own. In general I default to a) start with traditional diets of healthy countries, b) try Seth Roberts stuff and see how I feel, c) pay a LOT of attention to taste, to how I feel immediately afterwards, and to how I feel for the rest of the day. I think that for best results I should probably pay more attention to elimination but generally don’t.

    • http://yudkowsky.net/ Eliezer Yudkowsky

      I believe that Taubes is the popularizer of a larger faction, and what convinces me (as against a whole field) is not his positive theory but his negative story about how the field ignored its own evidence on carbs and fats. E.g. Dawes doesn’t have a positive theory at all that I know of w/r/t psychoanalysis / Rorschach / etc., he just says the state of experimental evidence is “tested and disproven” and that psychiatrists discarded hundreds of replications in favor of how good it felt to trust their “clinical experience”.

  • http://www.youtube.com/watch?v=wX5II-BJ8hI Jonas

    If critics do read more advocates can complain critics aren’t of the right sort, e.g., not enough math, sociology, or whatever.

    This is true. Sociologists do not count as critics of the right sort. I mean, if you do sociology which answers the dominant social demands, you can make a living. But if you want to do sociology, not critical sociology, just plain rigorous sociology, well, you have to know you just became an artist when it comes to your income and your status in academia. But sociology isn’t a science in the first place (same goes for economics).

    • http://permut.wordpress.com/ Michael Bishop

      I agree that rigor is not valued enough in sociology, my discipline, but you are simply wrong to claim it will sink you, or to say that critical sociology is a ticket to income/status in the discipline.

      • http://video.google.com/videoplay?docid=-572378513 Jonas

        I never meant to say that critical sociology is the ticket.

        In recognition of the KISS principle, I would say: it depends a lot on your social capital and your habitus derived from your social background. Luck is important, too. But this could just be my Bourdieu phase.

        Social capital and habitus are just words, like silence. It is about understanding the phenomena they describe. I am not saying that mathematical sociology is useless. I liked the model building courses a lot.
        Sometimes, I am just not sure if I can learn something useful from mathematical opinion dynamics models or other sociological models that try to squeeze social phenomena into deterministic frames. It is still questionable whether this will ever lead to predictability.

        There might be other ways of accomplishing that. The Lakota tribe says “when you discover you are riding a dead horse, the best strategy is to dismount”. But everybody has sunk costs, so it is difficult.

        I think rigor will only sink you if you disagree with your professors at too early a stage of your academic journey. And I think for sociologists it is important to be amazed over and over again by the question of why social order is possible. And to stay open-minded about everything. There is always something to learn.

  • Mike Howard

    Robin, to clarify through example, could you say what you do (that Eliezer or other smart contrarians don’t and should be doing) to convince outsiders and authorities that your contrarian views are worth their limited attention? As opposed to convincing them once you have their full attention, or admitting they shouldn’t agree with you, which is fair enough but may not be a useful solution in some cases.

    • http://hanson.gmu.edu Robin Hanson

      See the first comment thread above.

  • http://www.thefaithheuristic.com Justin Martyr

    Maybe you can’t ever get neutral outside observers. Instead your best bet is to appeal to low-status intellectuals – who are already desperate for a new strategy. If the contrarian view is correct then they can (finally) win dominance encounters with higher-status intellectuals.

    • Peter Twieg

      Instead your best bet is to appeal to low-status intellectuals – who are already desperate for a new strategy.

      As a grad student, I have to say that this pretty much sounds like grad school.

      Of course, achieving status by latching onto a contrarian view isn’t always mutually inclusive with achieving employment by latching onto a contrarian view.

    • http://yudkowsky.net/ Eliezer Yudkowsky

      Max Planck once claimed that science progresses because the old generation dies off and a new generation grows up familiar with the concepts. This notion has always tickled me because it implies that science’s ability to distinguish truth from falsehood lies primarily in the grad students.

      I find this plausible, actually. Grad students may simply be more approximately rational in the relevant sense.

  • jonathan

    I’m fascinated by how contrarian views, which all of us hold some of the time, can cross over into crankdom or crackpotism. The internet crackpot scale is useful, assigning points to references to “paradigm shift” and allusions to Newton (or Feynman), etc. Perhaps the most challenging line is persecution complex. In that regard, I’m seeing Levitt & Dubner cross the line into crankdom by referring to the thousands of climate scientists who vehemently disagree with their facile treatment of global warming – which Levitt has referred to in interviews as a simple issue to solve – as a “religious orthodoxy” which is persecuting them. It really interests me to see that a respectable academic economist can so easily fall into classic crank responses when challenged on contrarian views outside his realm of expertise.

    My only other comment is that much contrarian thinking is actually a different perspective on a subject that is either in flux or not completely understood. Consensus naturally forms around leading ideas, and those ideas resist until sufficient contrary evidence presents itself; but that room for dissent disappears once the root levels are known. It is one thing to argue about macroeconomics and another to think that Special Relativity is wrong.

  • http://www.rationalmechanisms.com richard silliker

    “Ask yourself carefully: if most contrarians are wrong, why should they believe your cause is different?”

    They do not have to believe. It is only impotant what I feel

    • http://www.rationalmechanisms.com richard silliker

      . It is only important what I feel

      How about an edit option?

      • http://permut.wordpress.com/ Michael Bishop

        If what you feel is the only thing that is important, then why comment?

  • Marc

    One additional thing that I always look for in a believable contrarian argument is a thorough understanding of why the conventional argument exists at all. If someone simply says that everyone else is completely stupid or wrong then they probably haven’t understood the conventional argument fully. If they give consideration to the conventional argument in a nuanced manner then I feel more comfortable taking their opinions seriously.

    • http://hanson.gmu.edu Robin Hanson

      This is a good heuristic, though expensive for outsiders to apply.

    • http://entitledtoanopinion.wordpress.com TGGP

      Katja Grace makes a similar point here.

    • http://yudkowsky.net/ Eliezer Yudkowsky

      I’ll express the contrarian view that you should be careful not to do too much of this. Explaining away the other person’s views is a cognitively aggressive sort of act, and it slows down the process of admitting that you’re wrong if you happen to be wrong. It’s not likely to help you get the job done on the object level. That’s why I’ll often just say “People are crazy, the world is mad” rather than going into detail. Another prime example of Occam’s Imaginary Razor.

      • http://hanson.gmu.edu Robin Hanson

        But do you understand the reasons most have for taking the standard view here? 🙂

      • http://yudkowsky.net/ Eliezer Yudkowsky

        Yes, it’s because doing elaborate deconstructions of other people’s psychology lets you feel superior to them.

        (This is not meant seriously, but it is meant to convey the reason for the caution!)

      • http://hanson.gmu.edu Robin Hanson

        It is hard to see how a specific analysis of the reasons for someone’s opinions could make you feel any more superior to them than you get by just dismissing them as “crazy.”

      • http://lesswrong.com/ Eliezer Yudkowsky

        Really? It seems to me that the amount of self-satisfaction scales pretty linearly with the length of the explanation of their psychology.

      • http://hanson.gmu.edu Robin Hanson

        Eliezer, well that can and should be tested!

      • Douglas Knight

        Eliezer’s warning that one should not do this so much seems to me to apply much more to the kind of analysis that Robin is asking for than to Marc’s suggestion.

  • Floodplain

    It’s not right to judge something based on majority rule; for instance, most people are religious; it is conventional to assume the existence of the supernatural; materialists are contrarian. Whether the majority of people believe something or not is irrelevant to its truth value.

    By some standards, nearly all theories that are accepted today were once contrarian (gravity, evolution, etc.); the scientific method was once contrarian. Different things are contrarian at different snapshots in time. Maybe contrarians are wrong more often than not, but the consensus view often started out contrarian.

    It’s also hard to assess the track record of contrarians when “contrarian” itself is difficult to define. For instance, should we limit the definition to opposing viewpoints in academia that are at least recognized (e.g., 90% of geologists believe a river was formed by process A; 10% believe process B), or include contrarian views anywhere (for instance, say 90% of the lay public believe, from rumours spread by an individual, that the river was an artificial creation of the government, or, from church leaders, that it was created by God 6000 years ago)?

    It seems that you can either say contrarians are right or wrong most of the time just by redefining contrarian.

  • mjgeddes

    My own big contrarian view:

    ‘Bayesian Induction is merely a special case of categorization (analogical inference)’

    Being contrary about rationality itself means that a person cannot even necessarily accept the standard premises of rational discourse to resolve the issue. Redefine this (recategorize that), shift a definition here and there, and you shift rationality itself 😉

    • mjgeddes

      Paper: ‘Argument by Analogy’ (Juthe, ’05)

      Demonstrates that there are perfectly valid analogical arguments that cannot be converted into inductive (Bayesian) or deductive form.

      • Steve

        Ain’t gonna trade a bottle of St. Germain for a 300kb pdf, but from the summary and introduction I don’t see where the author shows that arguments by analogy don’t depend on the probability that the explicit narrative is true, multiplied by the probability that it can be validly applied with sufficient precision to the implicit problem domain.
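
        (Writing Steve’s decomposition out, in my notation rather than his: for an argument by analogy to reduce to Bayes, one needs

        \[
        P(\text{conclusion holds in target}) = P(\text{source narrative is true}) \times P(\text{it transfers validly} \mid \text{narrative true}),
        \]

        and the reply below is, in effect, that the first factor cannot be assigned for context-dependent moral claims.)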

      • mjgeddes

        It’s freely available here:

        27-page pdf version

        Example given in the paper, moral reasoning by analogy:

        ‘In seeking protection from Eastern’s creditors in bankruptcy court, Lorenzo (Chairman of financially troubled Eastern Airlines) is like a young man who killed his parents and then begged the judge for mercy because he was an orphan. During the last three years, Lorenzo has stripped Eastern of its most valuable assets and then pleaded poverty because the shrunken structure was losing money.’

        To make the analogical relations redundant and apply Bayes, you need, as you say, to find “the probability that the explicit narrative is true, multiplied by the probability that it can be validly applied with sufficient precision to the implicit problem domain.”

        The trouble is with the former probability: you can’t assign such a probability, because there is no precisely definable, context-free moral statement you can make; there will always be counter-examples for given situations (see paper).

        Put simply, you can never fully detach near-mode details from far-mode narrative, thus independent probabilities can’t be assigned. That’s why Bayes ultimately fails.

  • Patrick

    Your initial premise is wrong. Contrarian positions tend to win out a very small percentage of the time.

    This should make sense, though, since we reward cooperation and going along more than we do being right. It is considered OK to fail in a way that lots of people fail at, but wrong to fail in a way that very few people fail at. This makes the contrarian position much harder to play, since the rewards are small in the long run and the costs and risks are very high. The existence of social mores that reinforce consensus views and punish contrarian ones thus acts as a transaction cost in the marketplace of ideas. One must have a lot of knowledge and social capital to risk taking a contrarian opinion, and one has to have a preference for taking risks. That said, there are some opportunities out there.

    The problem is that in certain environments contrarian positions are too easy to take, the rewards for blindly showing skepticism are too high, and the time horizon for verification is too low. Another problem is audience: amongst laymen or non-experts one could make bad arguments in the form of the ‘one-way hash argument’. These sound like contrarian positions, but they’re not; they’re a rhetorical tool that takes advantage of the difficulties in defending certain positions. This may be a no-true-Scotsman fallacy though, and we’d need to come to an agreement on our definitions.

    Imagine a roulette wheel, and you and two mathematicians all agree to make a bet without any knowledge of what the others will bet. You can bet red or non-red, with payouts split evenly amongst winners.
    What does it mean to be a contrarian here? Is it everyone who bets on red, or the minority position? What is the correct bet to maximize your payout?
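
    A worked version of the payout question, as a minimal sketch (assumptions are mine, not Patrick’s: a European wheel with 18 red pockets out of 37, a unit pot, and both mathematicians betting the likelier outcome, non-red):

        # Expected pot share in the roulette game above, assuming a European
        # wheel (18 red / 37 pockets), a unit pot split evenly among winners,
        # and both mathematicians betting non-red.
        from fractions import Fraction

        P_RED = Fraction(18, 37)       # ball lands on red
        P_NONRED = 1 - P_RED           # 19/37, includes the green zero

        def expected_share(bet_red: bool, others_on_nonred: int) -> Fraction:
            """Expected fraction of the pot won, splitting among winners."""
            if bet_red:
                return P_RED                           # alone on red, whole pot
            return P_NONRED / (1 + others_on_nonred)   # share the non-red pot

        print(expected_share(True, 2))    # 18/37  ~ 0.486
        print(expected_share(False, 2))   # 19/111 ~ 0.171

    Under those assumptions the minority bet maximizes expected payout even though non-red is the likelier outcome; the reward comes from not splitting the pot.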

  • http://www.thatsaterribleidea.com evizaer

    To add to fallacy #4:

    There’s clearly a catch-22 here. If you don’t study their view enough, the contrarian castigates you for ignorance; but the only people who have the time, patience, and interest to read all the necessary material to be “qualified” are those who want to believe in (or have a vested interest in) the contrary view in the first place.

  • Hal Finney

    I see a lot of principles and heuristics being advanced, but not much data. What are some examples of contrarian views in the past which turned out to be right, vs. ones which turned out to be wrong? Could we have applied these various heuristics in the past and successfully weeded out the winners from the losers?

    To be honest, I am having trouble coming up with ideas in either category. Wrong contrarian views have probably been forgotten. To find correct ones, presumably every change in a mainstream view started off contrarian to some extent, but this is not helping me come up with many examples. Continental drift. The theory of evolution. Heliocentrism. The value of women and racial minorities. Surely there are more instructive cases available?

    • Hal Finney

      This thread is getting old but it is a topic of interest to me, so I wanted to post a few examples and links which I have run across. These are contrarian views which have been pretty decisively rejected:

      – AIDS denialism: AIDS is not caused by HIV, but by environmental factors such as drug abuse

      – quantized redshift: redshifts of distant galaxies tend to fall close to multiples of certain values, contrary to most cosmological theories

      – cold fusion: loading deuterium into various metal compounds releases anomalous heat and radiation

      – laetrile: cures cancer

      On the other side are contrarian views which have become accepted. I think to count as contrarian, we need to have had a period of time in which the view was seen as disreputable or at least as unlikely. Sometimes evidence leads to a new model pretty quickly, as when cosmologists discovered in the 1990s that the universe’s expansion was accelerating rather than decelerating. Contrarian successes should look more like paradigm shifts, internal revolutions. A few possible examples:

      – punctuated equilibrium as a model for evolution

      – some fats are good for you, rather than all fat being unhealthy

      – behaviorism fails to explain most human behavior

      – monetarist economic theories replaced Keynesian (oh, wait, I mean it the other way around)

      A good source of physics-oriented contrarianism going back to the 1980s is John Cramer’s Alternate View columns. Astonishingly, I found that link in a posting I made in 1996, and it is still good (and still being updated).

      I’d still appreciate people adding others, if they run across this posting in the future.

      • http://CommonSenseAtheism.com lukeprog

        Bayesianism in stats?

        Non-collapse in QM?

        Plate tectonics?

  • Eric Falkenstein

    I know it’s an old thread, but I’m bored. Barkley Rosser states that my straightforward, testable alternative to standard theory – that risk is unrelated to return, as a consequence of a utility function based on relative rather than absolute wealth – is a sideshow. It generates a rather clear, testable, and important alternative to our current paradigm, which states that if you measure the right metric of wealth, covariance with that metric is positively and linearly related to expected returns.

    Rosser states econophysics is a promising alternative. I disagree. Of course, one can point to physics-like things in many stochastic models. Heck, the original Black-Scholes was derived via a differential equation used for the heat equation in physics. But as a field, econophysics generates an embarrassment of riches: models that produce statistical properties – means, variances, jumps – ‘like’ those we observe in financial time series. But that is too easy.

    Generating variance, jumps, and phase shifts is one thing, but to then assert these are laws being obeyed in real time is quite different from fitting them to the peso-dollar exchange rate in the 1980s. I haven’t seen any clean testable hypotheses generated from econophysics, only many papers showing how, with hindsight, various models can emulate the past. That’s not promising; anyone with Excel and a time series can come up with a fun model that has a high R2. If all you want to do is fit, atheoretical approaches are great for that. If you want to predict, you need a theory that restricts.

  • http://grognor.blogspot.com/ Grognor

    I don’t know if it’s the beginning of a darker trend, but I’ve noticed a few contrarians using “inferential distance too large” as an excuse. It’s similar to “few who study us disagree”. It might actually be the same thing under a new name; I’m not sure. (Certainly it has the same motivational origin.)

    I hate how Eliezer introduced the concept and now I’m seeing it used as an excuse not to explain crazy viewpoints. (Not giving examples, for obvious reasons.)

  • Richard Boase

    • http://juridicalcoherence.blogspot.com/ Stephen Diamond

      Allow me to belabor the obvious: this very old post merits re-reading—by its author.

      Robin argues that contrarian views are prima facie less plausible than conventional views.

      This obvious conundrum (how a contrarian can rationally hold a view he should regard as probably wrong) is seldom addressed by those (such as “contrarians”) who are rationally required to address it. Robin, in this posting, addresses it without claiming to resolve it.

      My solution is to distinguish between the near-mode concept of opinion and the far-mode concept of belief, recognizing that in controversy rationality is advanced when proponents advance their opinions (factoring out the evidence provided by the brute judgments of others) despite having contrary beliefs: knowing full well that they are probably wrong! http://tinyurl.com/6kamrjs

      [I can’t in good faith recommend this to, say, Yudkowsky. He can’t as effectively ask people to contribute thousands of dollars to something he really doesn’t believe—or at least really shouldn’t believe. But note well: this means there’s deception and irrationality built right into the core of the MIRI project.]
