23 Comments

"Perhaps the 70/30 weighting reflects how much advice is influenced by differing preferences in typical advice situations?"

"After all, I am the one who has to live with the consequences of my decision."

That is a difference in preferences in most advice situations: since I am responsible for my decision, my penalty for error is larger than that of a colleague, who is simply motivated to make the best possible choice (while I am strongly motivated to avoid the worst).

So there are many circumstances where we should give much more weight to others' opinions, especially in low-risk/high-reward ventures.
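
To make the asymmetry concrete with made-up payoffs: an advisor maximizing expected value and a decider guarding against the worst case can prefer different options.

```python
# Invented payoffs: (best case, worst case), each assumed equally likely.
options = {"safe": (4, 2), "bold": (9, -2)}

ev    = {k: (hi + lo) / 2 for k, (hi, lo) in options.items()}  # expected value
worst = {k: lo for k, (hi, lo) in options.items()}             # worst case

print(max(ev, key=ev.get))        # advisor maximizing expected value: 'bold'
print(max(worst, key=worst.get))  # decider avoiding the worst outcome: 'safe'
```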

I noticed that Harvey and Fischer's paper used a point reward system. If, instead, they were to deduct points from a set total, that could test whether people behave differently when it's their "responsibility" (using status quo bias in our favour!).

Anne, even when you do ask for and hear reasons, you usually do not understand all the details of their reasons.

So the 70/30 rule could be a plausible hypothesis when running prediction-market simulations. For example, an agent's predicted probability would be p = 0.7*own_belief + 0.3*market_price. Very interesting...
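
As a minimal sketch of that rule in a simulation (the price-update dynamics, step size, and beliefs below are illustrative assumptions, not from any particular market model):

```python
# Each agent blends a private belief with the current market price (70/30),
# then trades, nudging the price toward that blended estimate.
def blended_estimate(own_belief, market_price, self_weight=0.7):
    return self_weight * own_belief + (1 - self_weight) * market_price

def simulate(beliefs, price=0.5, step=0.1):
    for b in beliefs:
        price += step * (blended_estimate(b, price) - price)
    return price

print(simulate([0.8, 0.6, 0.9, 0.7]))  # price drifts up from 0.5 toward the beliefs
```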

Of course, both my complaints apply to studies of other biases (although some study designs are better at hiding what is being studied), but this bias seems so simple and straightforward that I think it might be more susceptible to being destroyed in a study.

70:30 sounds tiny, both compared to other biases and compared to what I would have expected for this particular bias. I failed to track down the study, but I expect the result is due either to people being on their best behavior during a study or to the study being so artificial that people have to admit to themselves that the other person really is equally qualified.

"Because decision makers are privy to their own thoughts, but not to the reasons underlying an advisor's opinion, they place a higher weight on their own opinion than on an advisor's."

Is it permissible in the hypothetical here to ask the advisor to provide an explanation or rationale for their opinion? In the real world, this would of course be possible. I don't think that wanting to know someone's reasoning is evidence of a self-preference bias -- however, it *would* be self-preference bias if you only asked for explanations of opinions that did not match yours. Could making a practice of questioning the opinions of people who agree with you be a means of counteracting a tendency toward this sort of bias?

Nick,

The magnitude of the split would seem to be much too large in science (most experiments conducted by scientists who have engaged in misconduct are not fraudulent) and other areas with strong truth-supporting institutions. This should render it much more socially costly than in the ancient world.

Robin,

We can operationally define 'conscious dishonesty' as what people can detect themselves and self-report in the anonymous survey evidence, i.e. they can report it to themselves as well as to a pollster. Thinking of oneself as having the character trait of honesty is a very different thing from thinking that one is not consciously lying about a particular fact: the former claim is highly ambiguous and people can define the terms to taste, as 'good driving' is defined to mean 'safe' or 'fast' depending on personal preferences.

Carl, yes we see a lot of dishonest behavior in the world, but introspection is usually not a very good guide to seeing your own honesty. The line between conscious and unconscious dishonesty is nowhere near as clear as we often assume, and a large fraction of those people who do dishonest things think of themselves as basically honest people.

Perhaps the 70/30 weighting reflects how much advice is influenced by differing preferences in typical advice situations? To save us the effort of thinking explicitly about friendly advisors' preferences, we might have been endowed with this as a simple heuristic?

Robin,

"But for the sort of situations where you honestly think you are giving advice for a common good, you need a better than average reason to think that others are giving advice to advance private reasons."

What do you mean here? Introspection can provide you with confidence in your own conscious honesty approaching 1.0. In the absence of good lie detection capabilities, you will then have to discount any interlocutor's belief by the prior probability that a random peer will be consciously deceptive. It seems that all you need is 'some reason' to assign a prior probability of deception greater than 0 to your peers, not a 'better than average reason.'

The literature on academic and professional misconduct (including the preference falsification work cited above) does appear to provide reason to discount others (although generally not as much as in the post).

Anonymous surveys indicate that a large proportion of scientists, perhaps one third, consciously engage in academic misconduct: http://aapgrandrounds.aappu...

Your own post on image manipulation is pertinent: http://www.overcomingbias.c...

Business students frequently cheat: http://links.jstor.org/sici...

Politicians... need I say more? Regardless: http://www.factcheck.org/

Bob, the question is what your having to live with the consequences has to do with your being more accurate. Perhaps you think you are trying harder to be right.

It seems to me that, if I am truly the decision-maker in a situation, I ought to give my own opinion considerably more weight than I do the opinions of others. After all, I am the one who has to live with the consequences of my decision. Weighing others' opinions at 30% seems plenty to me.

Now granted, I am the captain of my own sailboat. In my 44 years, I've only been employed once; otherwise I have been the owner and the boss. Never of anything grand, but enough that I'm typing this in the Bahamas.

I would call 70/30 self-confidence, not bias. In moderation it should be a virtue.

If you accept that you often pretend to give advice to advance the common good, but really give advice to advance your private good, then you can reasonably infer that others may be doing the same, and attribute a lot of disagreements in such situations to differing preferences. But for the sort of situations where you honestly think you are giving advice for a common good, you need a better than average reason to think that others are giving advice to advance their private good.

Rcriii, their stubbornness only justifies discounting their opinion to the extent that stubbornness is correlated with poor quality opinions, and that you are in fact less stubborn than they are. They justify their stubbornness in terms of the stubbornness they anticipate seeing from you.

I wonder whether the "self-preference" bias would disappear if I am lukewarm about a candidate, and my colleague is passionate about the same person. Is it possible that we are more biased toward the intensity of an opinion rather than toward its source?

If other people are likely to discount my judgement, then they are less likely to concede a point under disagreement. Isn't there then some justification for me discounting _their_ opinion?

I was on a job panel today interviewing candidates to replace our departing administrator. It was a strong field, and at the end of the day we were left with four people who all seem good picks. We seem to agree about the relative strengths of the different candidates - one had more experience, one seemed especially intelligent but we were not sure how long she would stay, another was good all round but would likely require us to go above the intended salary range, etc. The panel members have somewhat different rankings of the four top candidates because we seem to place different weights on different attributes. Herein lies a difficulty: it is not obvious whether or to what extent these different weights simply reflect different *preferences* or instead different *beliefs* about which attribute is most important for doing this kind of job well in some absolute sense. It seems that in practice, these factors can be difficult to disentangle.
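
As a toy illustration (candidates and scores invented), identical attribute scores can yield different rankings under different panelists' weights, whichever way those weights are interpreted:

```python
# Candidate scores on (experience, intelligence, overall fit); all invented.
candidates = {
    "A": (0.9, 0.6, 0.7),
    "B": (0.6, 0.9, 0.5),
    "C": (0.7, 0.7, 0.9),
}

def rank(weights):
    score = lambda c: sum(w * a for w, a in zip(weights, candidates[c]))
    return sorted(candidates, key=score, reverse=True)

print(rank((0.6, 0.2, 0.2)))  # panelist weighting experience: ['A', 'C', 'B']
print(rank((0.2, 0.6, 0.2)))  # panelist weighting intelligence: ['B', 'C', 'A']
```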
