
Social Proof, But of What?

People tend to (say they) believe what they expect others around them will soon (say they) believe. Why? Two obvious theories:
A) What others say they believe embodies info about reality.
B) Key audiences respect us more when we agree with them.

Can data distinguish these theories? Consider a few examples.

First, consider that in most organizations, lower-level folks eagerly seek “advice” from upper management. Except that when such managers announce their plan to retire soon, lower-level folks immediately become less interested in their advice. Manager wisdom stays the same, but the consensus on how much others will defer to what they say collapses immediately.

Second, consider that academics are reluctant to cite papers that seem correct, and that influenced their own research, if those papers were not published in prestigious journals and seem unlikely to be so published in the future. They’d rather cite a less relevant or influential paper in a more prestigious journal. This is true not only for strangers to the author, but also for close associates who have long known the author and cited that author’s other papers published in prestigious journals. And this is true not just for citations, but also for awarding grants and jobs. As others will mainly rely on journal prestige to evaluate paper quality, that’s what academics want to use in public as well, regardless of what they privately know about quality.

Third, consider that most people will not accept a claim on topic area X that conflicts with what MSM (mainstream media) says about X. But that could be because they consider the media more informed than other random sources, right? However, they will also not accept this claim on X when made by an expert in X. But couldn’t that be because they are not sure how to judge who is an expert on X? Well, let’s consider experts in Y, a related but different topic area from X. Experts in Y should know pretty well how to tell who is an expert in X, and know roughly how much experts can be trusted in general in areas X and Y.

Yet even experts in Y are reluctant to endorse a claim made by an expert in X that differs from what MSM says about X. As the other experts in Y whose respect they seek also tend to rely on MSM for their views on X, our experts in Y want to stick with those MSM views, even if they have private info to the contrary.

These examples suggest that, for most people, the beliefs they are willing to endorse depend more on what they expect their key audiences to endorse than on their private info about belief accuracy. I see two noteworthy implications.

First, it is not enough to learn something, and tell the world about it, to get the world to believe it. Not even if you can offer clear and solid evidence, and explain it so well that a child could understand it. You instead need to convince each person in your audience that the other people they see as their key audiences will soon be willing to endorse what you have learned. So you have to find a way to gain the endorsement of some existing body of experts that your key audiences expect each other to accept as relevant experts. Or you have to create a new body of experts with this feature (such as, say, a prediction market). Not at all easy.

Second, you can use these patterns to see which of your associates think for themselves, versus aping what they think their audiences will endorse. Just tell them about one of the many areas where experts in X disagree with MSM stories on X (assuming their main audience is not experts in X). Or see if they will cite a quality never-to-be-prestigiously-published paper. Or see if they will seek out the advice of a soon-to-be-retired manager. See not only whether they will admit in private which is more accurate, but whether they will say so when their key audience is listening.

And I’m sure there must be more examples that can be turned into tests (what are they?).


Which biases matter most? Let’s prioritise the worst!

As part of our self-improvement program at the Centre for Effective Altruism I decided to present a lecture on cognitive biases and how to overcome them. Trying to put this together reminded me of a problem I have long had with the self-improvement literature on biases, along with those on health, safety and nutrition: they don’t prioritise. Kahneman’s book Thinking, Fast and Slow is an excellent summary of the literature on biases and heuristics, but risks overwhelming or demoralising the reader with the number of errors they need to avoid. Other sources are even less helpful at highlighting which biases are most destructive.

You might say ‘avoid them all’, but it turns out that clever and effort-consuming strategies are required to overcome most biases; mere awareness is rarely enough. As a result, it may not be worth the effort in many cases. Even if it were usually worth it, most folks will only ever put a limited effort into reducing their cognitive biases, so we should guide their attention towards the strategies which offer the biggest ‘benefit to cost ratio’ first.
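
To make that ranking rule concrete, here is a minimal sketch of the kind of prioritisation I have in mind, assuming we could give each bias a rough score for the harm it does and for the effort its fix requires. The bias names and numbers below are hypothetical placeholders for illustration only, not actual estimates.

```python
# Toy sketch: rank biases by a rough benefit-to-cost ratio.
# The names and numbers are hypothetical placeholders, not estimates.

biases = [
    # (name, estimated harm avoided by debiasing, effort cost of the fix)
    ("overconfidence", 9, 3),
    ("sunk cost fallacy", 6, 2),
    ("anchoring", 5, 4),
    ("availability heuristic", 4, 5),
]

# Sort by benefit-to-cost ratio, highest first.
ranked = sorted(biases, key=lambda b: b[1] / b[2], reverse=True)

for name, benefit, cost in ranked:
    print(f"{name}: benefit/cost = {benefit / cost:.2f}")
```

Of course the hard part is coming up with credible numbers to feed into such a ranking, which is exactly what the shortlisting exercise below is meant to help with.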

There is a bias underlying this scattershot approach to overcoming bias: we are inclined to allocate equal time or value to each category or instance of something we are presented with, even when those categories are arbitrary, or at least not a good signal of importance. Expressions of this bias include:

  • Allocating equal or similar migrant places or development aid funding to different countries out of ‘fairness’, even if they vary in size, need, etc.
  • Making a decision by weighing the number, or length, of ‘pro’ and ‘con’ arguments on each side.
  • Offering similar attention or research funding to different categories of cancer (breast, pancreas, lung), even though some kill ten times as many people as others.
  • Providing equal funding for a given project to every geographic district, even if the boundaries of those districts were not drawn with reference to need for the project.

Fortunately, I don’t think we need to tackle most of the scores of cognitive biases out there to significantly improve our rationality. My guess is that some kind of Pareto or ‘80-20’ principle applies, in which case a minority of our biases are doing most of the damage. We just have to work out which ones! Unfortunately, as far as I can tell this hasn’t yet been attempted by anyone, even the Centre for Applied Rationality, and there are a lot to sift through. So, I’d appreciate your help to produce a shortlist. You can have input through the comments below, or by voting on this Google form. I’ll gradually cut out options which don’t attract any votes.

Ultimately, we are seeking biases that have a large and harmful impact on our decisions. Some correlated characteristics I would suggest are that the bias:

  • potentially influences your thinking on many things
  • is likely to change your beliefs a great deal
  • doesn’t have many redeeming ‘heuristic’ features
  • disproportionately influences major choices
  • has a large effect substantiated by many studies, and so is less likely the result of publication bias.

We face the problem that more expansive categories can make a bias look like it has a larger impact (e.g. ‘cancer’ would look really bad, but none of ‘pancreatic cancer’, ‘breast cancer’, etc. would stand out individually). For our purposes it would be ideal to group and rate categories of biases after breaking them down by ‘which intervention would neutralise this’. I don’t know of such a categorisation and don’t have time to make one now. I don’t expect this problem to be too severe for a first cut.
