People tend to (say they) believe what they expect others around them will soon (say they) believe. Why? Two obvious theories:
A) What others say they believe embodies info about reality.
B) Key audiences respect us more when we agree with them.
Can data distinguish these theories? Consider a few examples.
First, consider that in most organizations, lower-level folks eagerly seek “advice” from upper management. Yet when such managers announce plans to retire soon, those lower-level folks immediately become less interested in their advice. Manager wisdom stays the same, but the consensus that others will defer to what these managers say collapses immediately.
Second, consider that academics are reluctant to cite papers that seem correct, and that influenced their own research, if those papers were not published in prestigious journals and seem unlikely to be so published in the future. They’d rather cite a less relevant or less influential paper in a more prestigious journal. This holds not only for strangers to the author, but also for close associates who have long known the author and have cited that author’s other papers published in prestigious journals. And it holds not just for citations, but also for awarding grants and jobs. Since others will mainly rely on journal prestige to evaluate paper quality, that’s what academics want to use in public as well, regardless of what they privately know about quality.
Third, consider that most people will not accept a claim on topic area X that conflicts with what MSM (mainstream media) says about X. But that could be because they consider the media more informed than other random sources, right? However, they will also not accept this claim on X when it is made by an expert in X. But couldn’t that be because they are not sure how to judge who is an expert on X? Well, let’s consider experts in Y, a topic area related to but different from X. Experts in Y should know pretty well how to tell who is an expert in X, and know roughly how much experts in areas X and Y can be trusted in general.
Yet even experts in Y are also reluctant to endorse a claim made by an expert in X that differs from what MSM says about X. As the other experts in Y whose respect they seek also tend to rely on MSM for their views on X, our experts in Y want to stick with those MSM views, even if they have private info to the contrary.
These examples suggest that, for most people, the beliefs they are willing to endorse depend more on what they expect their key audiences to endorse than on their private info about belief accuracy. I see two noteworthy implications.
First, it is not enough to learn something and tell the world about it to get the world to believe it. Not even if you can offer clear and solid evidence, and explain it so well that a child could understand it. You instead need to convince each person in your audience that the people they see as their key audiences will soon be willing to endorse what you have learned. So you have to find a way to gain the endorsement of some existing body of experts that your key audiences expect each other to accept as relevant experts. Or you have to create a new body of experts with this feature (such as, say, a prediction market). Not at all easy.
Second, you can use these patterns to see which of your associates think for themselves, and which merely ape what they think their audiences will endorse. Just tell them about one of the many areas where experts in X disagree with MSM stories on X (assuming their main audience is not experts in X). Or see if they will cite a quality never-to-be-prestigiously-published paper. Or see if they will seek out the advice of a soon-to-be-retired manager. See not only whether they will admit in private which view is more accurate, but whether they will say so when their key audience is listening.
And I’m sure there must be more examples that can be turned into tests (what are they?).
> Yet even experts in Y are also reluctant to endorse a claim made by an expert in X that differs from what MSM says about X. As the other experts in Y whose respect they seek also tend to rely on MSM for their views on X, our experts in Y want to stick with those MSM views, even if they have private info to the contrary.
> These examples suggest that, for most people, the beliefs they are willing to endorse depend more on what they expect their key audiences to endorse than on their private info about belief accuracy.
The examples listed here seem like broad strokes that aren't nuanced enough to be really correct, and even if they were, they don't generalize well to the broad conclusion. There are plenty of obvious reasons why experts might not want to contradict the opinions of news networks publicly besides the fact that they like to agree with their viewers. There are also plenty of fields I can name where experts are all too happy to deride the "MSM" for their explanations and suggestions - it's practically used as a bonding mechanism in my field, cyber security. I think you are overfitting here on specific, politically charged topics as a rationalist and economist, and ending up saying something nonsensical. Do you find physicists at George Mason are keen on endorsing pop-sci articles?
Certainly correct as far as it goes. But it's not surprising; Trivers' analysis states that the "conscious" information we talk about and the "subconscious" information we use to make choices are not the same, and may be disjoint. (In practice, leading to the fine art of letting the conscious mind construct socially acceptable descriptions of why one made particular choices.) Much of the difficult work of life is maximizing one's social status, which has little to do with expounding factually accurate models of how the world works.
A fine example is Barbara Tuchman's "The March of Folly", which is a catalog of incidents where polities persistently carried out policies that proved disastrous. But they all share the same pattern: the actors in question were rewarded or punished not primarily based on the long-term success of the polity, but on internal political contests within the polity. E.g., in the run-up to the American Revolution, the parliamentary leaders of the U.K. were generally disposed to strike compromises that would keep the southern American colonies within the U.K. But the backbenchers insisted that Parliament stamp out any lack of submissiveness in the colonies. There is no evidence that any backbencher lost his seat over the failure of this strategy.
The interesting evidence would be cases where you can distinguish (1) choices people make that affect themselves but are not socially visible from (2) choices people make in public (ideally, ones that do not affect them much).