Smart Sincere Contrarian Trap

We talk as if we pick our beliefs mainly for accuracy, but in fact we have many social motives for picking beliefs. In particular, we use many kinds of beliefs as group affiliation/conformity signals. Some of us also use a few contrarian beliefs to signal cleverness and independence, but our groups have a limited tolerance for such things.

We can sometimes win socially by joining impressive leaders with the right sort of allies who support new fashions contrary to the main current beliefs. If enough others also join these new beliefs, they can become the new main beliefs of our larger group. At that point, those who continue to oppose them become the contrarians, and those who adopted the new fashions as they were gaining momentum gain more relative to latecomers. (Those who adopt fashions too early also tend to lose.)

As we are embarrassed if we seem to pick beliefs for any reason other than accuracy, this sort of new fashion move works better when supported by good accuracy-oriented reasons for changing to the new beliefs. This produces a weak tendency, all else equal, for group-based beliefs to get more accurate over time. However, many of our beliefs are about what actions are effective at achieving the motives we claim to have. And we are often hypocritical about our motives. Because of this, workable fashion moves need not just good reasons to believe claims about the efficacy of actions for stated motives, but also enough of a correspondence between the outcomes of those actions and our actual motives. Many possible fashion moves are unworkable because we don’t actually want to pursue the motives we proclaim.

Smarter people are better able to identify beliefs better supported by reasons, which all else equal makes those beliefs better candidates for new fashions. So those with enough status to start a new fashion may want to listen to smart people in the habit of looking for such candidates. But reasonably smart people who put in the effort are capable of finding a great many places where there are good reasons for picking a non-status-quo belief. And if they also happen to be sincere, they tend to visibly support many of those contrarian beliefs, even in the absence of supporting fashion movements with a decent chance of success. Which results in such high-effort smart sincere people sending bad group affiliation/conformity signals. So while potential leaders of new fashions want to listen to such people, they don’t want to publicly affiliate with them.

I fell into this smart sincere contrarian trap long ago. I’ve studied many different areas, and when I’ve discovered an alternate belief that seems to have better supporting reasons than a usual belief, I have usually not hesitated to publicly embrace it. People have told me that it would have been okay for me to publicly embrace one contrarian belief; I might then have had enough overall status to plausibly lead that one as a new fashion. But the problem is that I’ve supported many contrarian beliefs, not all derived from a common core principle. And so I’m not a good candidate to be a leader, either of any of my groups or of any of my contrarian views.

Which flags me as a smart sincere person. Good to listen to behind the scenes to get ideas for possible new fashions, but bad to embrace publicly as a loyal group member. I might gain if my contrarian views eventually became winning new fashions, but my early visible adoption of those views probably discourages others from trying to lead them, as they could then less plausibly claim to have been first with those views.

If the only people who visibly supported contrarian views were smart sincere people who put in high effort, then such views might become known for high accuracy. This wouldn’t necessarily induce most people to adopt them, but it would help. However, there seem to be enough people who visibly adopt contrarian views for other reasons to sufficiently muddy the waters.

If prediction markets were widely adopted, the visible signals of which beliefs were more accurate would tend to embarrass more people into adopting them. Most people do not relish this prospect, as it would have them send bad group affiliation signals. Smart sincere people might relish the prospect, but there are not enough of them to make a difference, and even the few there are mostly don’t seem to relish it enough to work to get prediction markets adopted. Sincerely holding a belief isn’t quite the same as being willing to work for it.
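
The post treats prediction markets only as an institution and never specifies a mechanism. As a purely illustrative sketch (nothing below comes from the post), here is a toy Python market maker using Hanson's logarithmic market scoring rule (LMSR); the class name, liquidity parameter, and trade sizes are invented for the example. The point is just that once trading starts, the posted price is a visible public estimate of which belief is accurate, which is exactly the kind of signal the paragraph above says most people would rather not face.

```python
import math

class LMSRMarket:
    """Toy logarithmic market scoring rule (LMSR) market maker for a
    single yes/no claim. The two outcome prices always sum to 1, so the
    "yes" price can be read as the market's consensus probability."""

    def __init__(self, b=100.0):
        self.b = b            # liquidity parameter: higher b = prices move more slowly
        self.q = [0.0, 0.0]   # outstanding shares for [no, yes]

    def _cost(self, q):
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, outcome):
        # Instantaneous price (implied probability) of an outcome.
        z = sum(math.exp(x / self.b) for x in self.q)
        return math.exp(self.q[outcome] / self.b) / z

    def buy(self, outcome, shares):
        # Charge a trader the cost of buying `shares` of `outcome`.
        new_q = list(self.q)
        new_q[outcome] += shares
        cost = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return cost

market = LMSRMarket(b=100.0)
print(round(market.price(1), 2))   # 0.5: no information in the price yet
market.buy(1, 80)                  # a confident trader buys 80 "yes" shares
print(round(market.price(1), 2))   # ~0.69: the public accuracy signal has moved
```

Anyone who thinks the posted price is wrong can profit by trading against it, so under a rule like this, quietly disagreeing with an accurate consensus has a visible cost.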

  • praxeologue

    I’m not sure ‘Smarter people are better able to identify good new fashion candidates.’ They sure don’t have that reputation in the actual fashion field.

    • http://overcomingbias.com RobinHanson

      Yes, I meant they are better able to find more accurate beliefs, which all else equal are better candidates. I’ve edited the post to say this more clearly.

  • Anonymous

    Popular nonfiction writers solve some of this by introducing a trivial mutation to the idea, while insisting it is nontrivial. So they can claim originality. (“I was the one who connected the dots.”)

    • infidelijtihad

      I was the first person who I know who characterized as artificial intelligence the fact that human organizations have exhibited traits of information processing independent of the comprehension of the component actors/symbols. It happened on the day when I realized Islam was coming to the city where I lived and would strike it. I am well aware, as I was then, that the idea is not fundamentally novel. It was, however, something of a revelation to me as someone who was waiting for the advent of artificial intelligence to find that AI had already been evolving before Turing ever buggered (up a program)

  • http://juridicalcoherence.blogspot.com/ Stephen Diamond

    Sincerity seems otiose. (Frankly, I think it’s a self-complimentary description.) You have smart people who are primarily interested in signaling their intellect and those who are primarily interested in signaling their group loyalty. (As you describe yourself, you belong to the former.) The greatest intellectual achievements occur where both drives are in high tension. “Sincerity” is merely high investment in intellect signaling versus affiliation signaling.

  • http://www.gwern.net/ gwern

    You’ve posted about this many times before, and I increasingly think you’re right. A dramatic example of this was this year’s Nobel prize for nanotech – pointedly excluding Drexler. He’s not mentioned in most (all?) of the media coverage, and not so much as namechecked in any of the official award materials (only Feynman).

    • http://www.sanger.dk Pepper

      … I.J. Good and Bostrom weren’t original thinkers?

      If you want to give Yudkowsky credit, how about giving credit to the people who convinced him that he was a fool in the early days of SingInst?

      • http://www.gwern.net/ gwern

        I like Bostrom a lot. I.J. Good… well, I’ve read his relevant article/essay several times and I’m still not too sure what to make of it or how good it actually is, but I guess he might win the trivia point of being the first inventor of the intelligence explosion (even if it seems like he had zero influence on anyone else until Luke or someone else dug him out of the archives a few years ago; nor was I hugely impressed by his statistics work compared to Savage, Raiffa/Schlaifer, or Jaynes). But my point was that if you read most things on AI risk in the past 3 or 4 years, popular or scholarly, you would get the impression that no one had thought seriously about the topic until Bostrom came along. Which of course is grossly misleading and incorrect and denies the many, many thinkers in the area (including but not limited to Yudkowsky) their proper rewards in terms of citation, of being acknowledged as right, and of having the mockery & attacks directed at them acknowledged as wrong. But they aren’t getting that – which exemplifies Hanson’s point here and in earlier posts about effective contrarianism: ‘it is dangerous to be right when established men are wrong’ or however Voltaire put it. It’s also dangerous to be right too *early*, and to be right about too *much*.

      • http://overcomingbias.com RobinHanson

        You’d also get the impression that no one has criticized the key foom scenario assumptions used.

      • John Lawrence Aspden

        … many thinkers in the area (including but not limited to Yudkowsky)….

        Including ironically Drexler! And he seems to have had some jolly good ideas about how to get some superintelligent capabilities without just building an AGI.

      • https://entirelyuseless.wordpress.com/ entirelyuseless

        “being acknowledged as right”

        We don’t have any proof yet that any of that stuff was right. I expect to win $1,000 from Eliezer when his AI risk stuff is proven false.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        What’s the exact bet, and what are the odds?

      • https://entirelyuseless.wordpress.com/ entirelyuseless

        It is on the Less Wrong bets registry (although I was using a different username at the time.)

        $10 by Unknown (paid in 2008) against $1000 inflation-adjusted paid by Eliezer Yudkowsky in the following event:

        “When someone designs a superintelligent AI (it won’t be Eliezer),
        without paying any attention to Friendliness (the first person who does
        it won’t), and the world doesn’t end (it won’t).”

        Later we specified some of those things. My original interpretation was simply “more intelligent than any human who has ever lived,” but Eliezer wanted that to be understood as “better than all humans at all known intellectual activities,” or something like that. So in the end I said he could be the judge of what was superintelligent.

        By “without paying any attention to Friendliness” I did not mean that people would not try to be safe, but that they would not try to make the thing optimize for human values as a whole. That is not necessary, it will not happen, and the world will definitely not end on account of that. Eliezer seems to have interpreted it more as “without putting in much effort towards safety.” In any event I expect to win the bet even on a very loose interpretation. I expect to win simply by having him concede that he was mistaken about the need for attention to AI risk (assuming that AI happens within our lifetimes).

      • infidelijtihad

        “AI” in the sense of nonhuman intelligence constructed at least partially as an outcome of human artifice, operating on scales beyond human comprehension and human survival: Extant for thousands of years.

      • https://entirelyuseless.wordpress.com/ entirelyuseless

        That may be the case, but the bet is not about that.

      • infidelijtihad

        It’s not about artificial intelligence as it is, so naturally you can’t expect to collect.

      • Philip Goetz

        entirelyuseless, how did Eliezer expect to collect on a bet which he can win only if the world ends?

      • https://entirelyuseless.wordpress.com/ entirelyuseless

        I already paid him, for that reason. We could have made his side $1,010 for that reason, but we didn’t bother given the huge odds in my favor.

      • zarzuelazen27

        Unless a person has the appropriate status (i.e. being part of the ‘in’ crowd – e.g., having impressive academic credentials), no one is going to listen to their weird ideas.

        The promoter of non-standard ideas can expect to be ignored at best (if they’re polite), or else receive a big ‘status slap-down’ and get blackballed at worst (if they make the mistake of being too impolite).

        Only people who are ‘socially approved’ to suggest big ideas (e.g., famous professors) are tolerated.

        And even then it’s not enough to have correct ideas. They need to *sound* clever and impressive. Unless you can present the right ideas in a way that clearly signals you are very smart (e.g., impressive-sounding mathematics), you’re out of luck.

      • consider

        I can’t find much of anything original in Bostrom’s thinking.

      • http://www.gwern.net/ gwern

        Really? His Simulation Argument is an original and insightful way to frame the old speculation ‘what if we’re all like programs in a giant computer man’; his anthropics book is the clearest thing I’ve read on it and he made contributions there; the ‘anthropic shadow’ and ‘probing the improbable’ were excellent contributions to making anthropics relevant to other areas; you have to admit that he popularized ‘existential risk’ and crystallized a lot of work around the term; his embryo selection paper with Shulman, while very simple applications of behavioral genetics, still brought more systematization to the topic than anyone else had bothered to do; and then there’s ‘astronomical waste’, the ‘reversal test’, the infinite ethics paper, the unilateralist’s curse, the whole brain emulation roadmap…

      • consider

        I had heard and read so much about existential risk before listening to or reading Bostrom that I didn’t find anything original, and I can’t easily judge who is a popularizer. I’m not sure what his special insights are into the Simulation Argument, something that goes back decades, but I’ll go back and listen to those parts of his talks.

        One problem I have listening to him speak is that I come away with the feeling that he doesn’t know enough about the science aspect of strong A.I. Still, I’ll watch and read him in the future.

        Oh, and I.J. Good discussing runaway superintelligence back in 1965, decades before Bostrom, is *not* just a trivia point! That is what it means to be original! ((grin)) And I doubt I.J. Good was the first, although maybe he was the first in print that was easily accessible.

      • http://www.gwern.net/ gwern

        I am not sure why you are trying to judge a philosopher from his talks, rather than his papers and books (all of which are online and fully available to you, most from his homepage)… As far as I.J. Good goes, who deserves credit for discovering America, Leif or Christopher? My attitude is that of Lawrence Shepp:

        “Yes, but when I discovered it, it stayed discovered.” http://www.nytimes.com/2006/02/05/weekinreview/05kolata.html

      • consider

        Again, my first sentence starts: “I had heard and read…”, and I meant to include that with Bostrom, although I haven’t read his academic papers. But when people give long presentations, and I’ve watched/listened to a few, the ideas are almost always in there in some detail.

        For example, I’ve watched and listened to about seven or eight of Hanson’s presentations / interview podcasts on Ems before his book came out. I bought his book as well, but you don’t need to in order to get 90% of his arguments. (I bought it in part so I could carefully go through it, but mostly to give it to my brother for Christmas; he will read the book, but I doubt he will watch Robin’s talks.)

      • Dave Lindbergh

        If I recall correctly, Barrow and Tipler’s 1988 Anthropic Cosmological Principle anticipated most, if not all, of Bostrom’s Simulation Argument.

        I don’t have any strong reason to think it was original with them, either.

        https://www.amazon.com/Anthropic-Cosmological-Principle-Oxford-Paperbacks/dp/0192821474

      • Peter McCluskey

        Keith Henson (https://en.wikipedia.org/wiki/Keith_Henson) claims to have been the first to make the simulation argument (in the 1980s, while talking with Hans Moravec). I’d guess that Bostrom noticed discussions about it on the Extropians mailing list in the early to mid 90s.

      • Dave Lindbergh

        I heard Henson talking about it around then.

        Still, I’d be surprised if somebody else hadn’t come up with it decades earlier.

        For sure Bostrom was far from the first (tho he gets credit for popularizing the idea).

    • http://overcomingbias.com RobinHanson

      I’m struck by your word “increasingly” – young people start out with a presumption against this, and must usually see personal evidence before they are convinced. Which means they continue to be fooled when young even when most of the old know otherwise. How can humanity get the young to learn from the old faster?

      • http://www.gwern.net/ gwern

        Well, it probably would’ve helped if you had given any examples, but in all the posts I can think of, you only speak in generalities… Naturally it’s not going to be that convincing, and I don’t think I was *that* young when you began that theme.

        In addition, nanotech and AI risk and cryonics and Bitcoin have, for me, the same value that a preregistered study has over a regular study: if you gave examples, I wouldn’t know how representative they are or whether they were historically accurate (is Isaac Newton an example of being too weird and contrarian because of his alchemy research and theological heresies? after a good deal of reading, I think no, but it wouldn’t’ve been hard to convince a younger more ignorant me of yes), but following a small set of examples in depth for the long-term, I can be sure that I’m not being fed a big heaping mound of publication bias, selective emphasis, and anachronism. And one good datapoint is worth many bad ones.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        What lesson do you hope to teach the youth? That they should withhold what they think is true because it might be personally dangerous? I’m not sure there’s more to say here than “I spit on that.” [Why are you so concerned about the (supposed) professional advancement of contrarian youth? Do impetuous contrarian youth create problems for teachers?]

        Put concretely, if you had acted on this “wisdom” when young, you would likely be incapable of contributing as much now.

        You’re turning into an old fogey who wants to shut the youth up. They have something to teach us (as in the song by the new Nobel Laureate in Literature).

      • http://overcomingbias.com RobinHanson

        I merely want the young to know the truth. I’m not saying what they should do with it.

    • Philip Goetz

      Nick Bostrom was in the Extropians in the 1990s, whom I’d peg as the group with the largest (original influence / current media attention).

  • http://juridicalcoherence.blogspot.com/ Stephen Diamond

    Timing is essential. But why? Is the correct explanation the one offered, that excessive contrarianism poisons the well? An alternative explanation is that to make a contribution, one must know the right time. This is far from easy; it’s not a matter of self-restraint. In intellectual matters, one knows the right time because of membership in the leading intellectual networks. (Randall Collins.)

    • http://overcomingbias.com RobinHanson

      Yes of course. Timing is essential to sending a good signal. In my post I was talking about the social value of timing, instead of the personal signaling value.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        I was referring to (per Collins) social value. An ill-timed contribution is typically ignored. Social value demands social influence.

  • JamieNYC

    I’m tired so I read the title as “Smart Sincere Contrarian Trump”. Now, that would have been a great title!

  • Philip Goetz

    I’ve been wondering about the large disparity between the distribution of intelligence I see among famous intellectuals, many of whom are not very smart, and among my personal acquaintances, some of whom seem to be smarter than any famous person in the world whom I don’t know. I’ve been at parties where I think Plato would have been in the second quartile of intelligence. So I’ve suspected that some mechanism consistently and powerfully filters out the greatest intellects from positions of power, or even tenure. This is a candidate mechanism.

    • https://entirelyuseless.wordpress.com/ entirelyuseless

      I’m pretty sure that you are simply overestimating the intelligence of people you know, and underestimating the intelligence of people you don’t know.