Smart Sincere Contrarian Trap

We talk as if we pick our beliefs mainly for accuracy, but in fact we have many social motives for picking beliefs. In particular, we use many kinds of beliefs as group affiliation/conformity signals. Some of us also use a few contrarian beliefs to signal cleverness and independence, but our groups have a limited tolerance for such things.

We can sometimes win socially by joining impressive leaders with the right sort of allies who support new fashions contrary to the main current beliefs. If enough others also join these new beliefs, they can become the new main beliefs of our larger group. At that point, those who continue to oppose them become the contrarians, and those who adopted the new fashions as they were gaining momentum gain more relative to latecomers. (Those who adopt fashions too early also tend to lose.)

As we are embarrassed if we seem to pick beliefs for any reason other than accuracy, this sort of new fashion move works better when supported by good accuracy-oriented reasons for changing to the new beliefs. This produces a weak tendency, all else equal, for group-based beliefs to get more accurate over time. However, many of our beliefs are about what actions are effective at achieving the motives we claim to have. And we are often hypocritical about our motives. Because of this, workable fashion moves need not just good reasons to believe claims about the efficacy of actions for stated motives, but also enough of a correspondence between the outcomes of those actions and our actual motives. Many possible fashion moves are unworkable because we don’t actually want to pursue the motives we proclaim.

Smarter people are better able to identify beliefs better supported by reasons, which all else equal makes those beliefs better candidates for new fashions. So those with enough status to start a new fashion may want to listen to smart people in the habit of looking for such candidates. But reasonably smart people who put in the effort are capable of finding a great many places where there are good reasons for picking a non-status-quo belief. And if they also happen to be sincere, they tend to visibly support many of those contrarian beliefs, even in the absence of supporting fashion movements with a decent chance of success. Which results in such high-effort smart sincere people sending bad group affiliation/conformity signals. So while potential leaders of new fashions want to listen to such people, they don’t want to publicly affiliate with them.

I fell into this smart sincere conformity trap long ago. I’ve studied many different areas, and when I’ve discovered an alternate belief that seems to have better supporting reasons than a usual belief, I have usually not hesitated to publicly embrace it. People have told me that it would have been okay for me to publicly embrace one contrarian belief. I might then have had enough overall status to plausibly lead that as a new fashion. But the problem is that I’ve supported many contrarian beliefs, not all derived from a common core principle. And so I’m not a good candidate to be a leader for any of my groups or contrarian views.

Which flags me as a smart sincere person. Good to listen to behind the scenes to get ideas for possible new fashions, but bad to embrace publicly as a loyal group member. I might gain if my contrarian views eventually became winning new fashions, but my early visible adoption of those views probably discourages others from trying to lead them, as they can then less plausibly claim to have been first with those views.

If the only people who visibly supported contrarian views were smart sincere people who put in high effort, then such views might become known for high accuracy. This wouldn’t necessarily induce most people to adopt them, but it would help. However, there seem to be enough people who visibly adopt contrarian views for other reasons to sufficiently muddy the waters.

If prediction markets were widely adopted, the visible signals of which beliefs were more accurate would tend to embarrass more people into adopting them. Such people do not relish this prospect, as it would have them send bad group affiliation signals. Smart sincere people might relish the prospect, but there are not enough of them to make a difference, and even the few there are mostly don’t seem to relish it enough to work to get prediction markets adopted. Sincerely holding a belief isn’t quite the same as being willing to work for it.

  • praxeologue

    I’m not sure ‘Smarter people are better able to identify good new fashion candidates.’ They sure don’t have that reputation in the actual fashion field.

    • RobinHanson

      Yes, I meant they are better able to find more accurate beliefs, which all else equal are better candidates. I’ve edited the post to say this more clearly.

  • Anonymous

    Popular nonfiction writers solve some of this by introducing a trivial mutation to the idea, while insisting it is nontrivial. So they can claim originality. (“I was the one who connected the dots.”)

    • infidelijtihad

      I was the first person I know of to characterize as artificial intelligence the fact that human organizations have exhibited traits of information processing independent of the comprehension of the component actors/symbols. It happened on the day when I realized Islam was coming to the city where I lived and would strike it. I am well aware, as I was then, that the idea is not fundamentally novel. It was, however, something of a revelation to me, as someone who was waiting for the advent of artificial intelligence, to find that AI had already been evolving before Turing ever buggered (up a program).

  • Stephen Diamond

    Sincerity seems otiose. (Frankly, I think it’s a self-complimentary description.) You have smart people who are primarily interested in signaling their intellect and those who are primarily interested in signaling their group loyalty. (As you describe yourself, you belong to the former.) The greatest intellectual achievements occur where both drives are in high tension. “Sincerity” is merely high investment in intellect signaling versus affiliation signaling.

  • gwern

    You’ve posted about this many times before, and I increasingly think you’re right. A dramatic example of this was this year’s Nobel prize for nanotech – pointedly excluding Drexler. He’s not mentioned in most (all?) of the media coverage, and not so much as namechecked in any of the official award materials (only Feynman).

    • Pepper

      … I.J. Good and Bostrom weren’t original thinkers?

      If you want to give Yudkowsky credit, how about giving credit to the people who convinced him that he was a fool in the early days of SingInst?

      • gwern

        I like Bostrom a lot. I.J. Good… well, I’ve read his relevant article/essay several times and I’m still not too sure what to make of it or how good it actually is, but I guess he might win the trivia point of being the first inventor of the intelligence explosion (even if it seems like he had zero influence on anyone else until Luke or someone else dug him out of the archives a few years ago; nor was I hugely impressed by his statistics work compared to Savage, Raiffa/Schlaifer, or Jaynes). But my point was that if you read most things on AI risk in the past 3 or 4 years, popular or scholarly, you would get the impression that no one had thought seriously about the topic until Bostrom came along. Which of course is grossly misleading and incorrect, and denies the many, many thinkers in the area (including but not limited to Yudkowsky) their proper rewards: citation, being acknowledged as right, and having the mockery & attacks directed at them acknowledged as wrong. But they aren’t getting that – which exemplifies Hanson’s point here and in earlier posts about effective contrarianism: ‘it is dangerous to be right when established men are wrong’ or however Voltaire put it. It’s also dangerous to be right too *early*, and be right about too *much*.

      • RobinHanson

        You’d also get the impression that no one has criticized the key foom scenario assumptions used.

      • John Lawrence Aspden

        … many thinkers in the area (including but not limited to Yudkowsky)….

        Including ironically Drexler! And he seems to have had some jolly good ideas about how to get some superintelligent capabilities without just building an AGI.

      • entirelyuseless

        “being acknowledged as right”

        We don’t have any proof yet that any of that stuff was right. I expect to win $1,000 from Eliezer when his AI risk stuff is proven false.

      • Stephen Diamond

        What’s the exact bet, and what are the odds?

      • entirelyuseless

        It is on the Less Wrong bets registry (although I was using a different username at the time).

        $10 by Unknown (paid in 2008) against $1000 inflation-adjusted paid by Eliezer Yudkowsky in the following event:

        “When someone designs a superintelligent AI (it won’t be Eliezer), without paying any attention to Friendliness (the first person who does it won’t), and the world doesn’t end (it won’t).”

        Later we specified some of those things. My original interpretation was simply “more intelligent than any human who has ever lived,” but Eliezer wanted that to be understood as “better than all humans at all known intellectual activities,” or something like that. So in the end I said he could be the judge of what was superintelligent.

        I did not mean by “without paying any attention to Friendliness” that people would not try to be safe, but that they would not try to make the thing optimize for human values as a whole. That is not necessary, it will not happen, and the world will definitely not end on account of that. Eliezer seems to have interpreted it more as “without putting in much effort towards safety.” In any event I expect to win the bet even on a very loose interpretation. I expect to win simply by having him concede that he was mistaken about the need for attention to AI risk (assuming that AI happens within our lifetimes).

      • infidelijtihad

        “AI” in the sense of nonhuman intelligence constructed at least partially as an outcome of human artifice, operating on scales beyond human comprehension and human survival: Extant for thousands of years.

      • entirelyuseless

        That may be the case, but the bet is not about that.

      • infidelijtihad

        It’s not about artificial intelligence as it is, so naturally you can’t expect to collect.

      • zarzuelazen27

        Unless a person has the appropriate status (i.e. being part of the ‘in’ crowd – e.g., having impressive academic credentials), no one is going to listen to their weird ideas.

        The promoter of non-standard ideas can expect to be ignored at best (if they’re polite), or else receive a big ‘status slap-down’ and get blackballed at worst (if they make the mistake of being too impolite).

        Only people who are ‘socially approved’ to suggest big ideas (e.g. famous professors) are tolerated.

        And even then it’s not enough to have correct ideas. They need to *sound* clever and impressive. Unless you can present the right ideas in a way that clearly signals you are very smart (e.g., impressive-sounding mathematics), you’re out of luck.

      • consider

        I can’t find much of anything original in Bostrom’s thinking.

    • RobinHanson

      I’m struck by your word “increasingly” – young people start out with a presumption against this, and must usually see personal evidence before they are convinced. Which means they continue to be fooled when young even when most of the old know otherwise. How can humanity get the young to learn from the old faster?

      • gwern

        Well, it probably would’ve helped if you had given any examples, but in all the posts I can think of, you only speak in generalities… Naturally it’s not going to be that convincing, and I don’t think I was *that* young when you began that theme.

        In addition, nanotech and AI risk and cryonics and Bitcoin have, for me, the same value that a preregistered study has over a regular study: if you gave examples, I wouldn’t know how representative they are or whether they were historically accurate (is Isaac Newton an example of being too weird and contrarian because of his alchemy research and theological heresies? after a good deal of reading, I think no, but it wouldn’t’ve been hard to convince a younger more ignorant me of yes), but following a small set of examples in depth for the long-term, I can be sure that I’m not being fed a big heaping mound of publication bias, selective emphasis, and anachronism. And one good datapoint is worth many bad ones.

      • Stephen Diamond

        What lesson do you hope to teach the youth? That they should withhold what they think is true because it might be personally dangerous? I’m not sure there’s more to say here than “I spit on that.” [Why are you so concerned about the (supposed) professional advancement of contrarian youth? Do impetuous contrarian youth create problems for teachers?]

        Put concretely, if you had acted on this “wisdom” when young, you would likely be incapable of contributing as much now.

        You’re turning into an old fogey who wants to shut the youth up. They have something to teach us (as in the song by the new Nobel Laureate in Literature).

      • RobinHanson

        I merely want the young to know the truth. I’m not saying what they should do with it.

  • Stephen Diamond

    Timing is essential. But why? Is the correct explanation the one offered, that excessive contrarianism poisons the well? An alternative explanation is that to make a contribution, one must know the right time. This is far from easy; it’s not a matter of self-restraint. In intellectual matters, one knows the right time because of membership in the leading intellectual networks. (Randall Collins.)

  • JamieNYC

    I’m tired so I read the title as “Smart Sincere Contrarian Trump”. Now, that would have been a great title!