70 For Me, 30 For You

A working paper by Ilan Yaniv says we do listen to others, but we weight our own opinion at 70% and someone else’s equally qualified opinion at 30%:

Suppose you are responsible for hiring someone to fill a job, and you initially had a strongly favorable opinion about a candidate but are told that a colleague of yours has a lukewarm opinion of the same candidate. … from your internal point of view, the two opinions are not on a par. Decision makers place more weight on beliefs for which they have more evidence. Because decision makers are privy to their own thoughts, but not to the reasons underlying an advisor’s opinion, they place a higher weight on their own opinion than on an advisor’s. Indeed, studies show that other things being equal, people discount others’ opinions and prefer their own, with the weights split roughly 70% on self and 30% on other; this balance changes when differences in ability or knowledge between self and other are made salient

Is this self-preference a bias?  The main excuse I can see is that you might need to use your detailed reasons to make more detailed choices.  For example, you might prefer job candidates where you know the reasons they are good, because those reasons could help you match them to tasks.  Without such an excuse, you need a better than average reason to think that your reasons are better than the average reasons of others you don’t see.
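
For concreteness, here is a minimal sketch of the reported split, assuming it applies to point estimates of a single quantity (an illustration only, not a model from the paper):

```python
# A minimal sketch (illustrative; not from Yaniv's paper) of the reported
# 70/30 weighting: a decision maker combines two estimates of the same
# quantity as a weighted average, favoring their own.

def combine_opinions(own: float, advisor: float, self_weight: float = 0.7) -> float:
    """Weighted average of one's own opinion and an advisor's."""
    return self_weight * own + (1.0 - self_weight) * advisor

# Example: I rate a candidate 8/10, a colleague rates them 5/10.
# The combined judgment lands much closer to mine:
print(combine_opinions(8.0, 5.0))  # 0.7*8 + 0.3*5 = 7.1
```

An unbiased treatment of an equally qualified advisor would instead set self_weight to 0.5.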

  • conchis

    Random thought: would it be possible to justify some degree of underweighting of others’ opinions (to the extent that they are based on purely private information) as a means of incentivising communication?

  • Stuart Armstrong

    Should you be biased towards yourself as a means of gaining experience? If your decision is wrong, you can update your own judgement, since you can fully follow the reasoning.

    But if you took the decision on trust from someone else, and the decision is wrong, then the only experience you’ve gained is a negative one – give less trust to that person in future.

    Of the two options, generally, ending up with a better internal judgement system puts you in a better position the next time you have to decide.

    Relevant to all sorts of teaching/learning, etc… Occasionally trying to prove the teacher wrong is very educational.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Stuart, to gain experience you should certainly bother to create an estimate, and compare it to the estimates of others and to reality. But that does not mean you should rely only on your judgment when making choices.

  • Carl Shulman

    Different colleagues may have different motives, e.g. wanting more members of their particular school of thought, seeking to increase the representation of left-handers, etc.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Carl, these job colleagues would usually deny that differing motives were the reason they disagreed.

  • Carl Shulman

    Robin,

    If I prefer right-handed colleagues like myself, while my left-handed colleague prefers left-handed ones, and I think we are consciously honest and equally (unconsciously) biased, then I should average our factual beliefs about both right- and left-handed candidates, and then make a selection in accord with my preference. In cases like estimation of the length of a line in an experiment, this is clearly good policy. (A code sketch of this averaging-then-choosing procedure follows at the end of this comment.)

    But protestations of conscious innocence are often not fully credible. Economists making public statements apparently do engage in widespread ‘preference falsification’:

    http://www.econjournalwatch.org/pdf/DavisCharacterIssuesAugust2004.pdf

    Where advisors are concerned with impression management, they may consciously refrain from offering important but taboo arguments. Or some people may conclude that the end of ensuring greater academic representation for left-handers justifies the means. Where this is so, introspection does provide reason to discount the beliefs of others (although it will only sometimes justify the reported widespread 70:30 split).
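
    A minimal sketch of the averaging-then-choosing procedure mentioned above, with hypothetical candidates and a made-up handedness bonus standing in for the preference step:

```python
# A sketch of the procedure above (all candidate names, scores, and the
# handedness bonus are hypothetical): average the two parties' factual
# quality estimates 50/50, then choose according to my own preferences.

candidates = {
    # name: (my quality estimate, colleague's quality estimate, handedness)
    "Alice": (7.0, 9.0, "left"),
    "Bob": (8.0, 6.0, "right"),
}

def my_score(mine: float, theirs: float, handedness: str) -> float:
    """Average the factual estimates equally, then apply my preference."""
    avg_quality = (mine + theirs) / 2  # beliefs get equal weight
    bonus = 0.5 if handedness == "right" else 0.0  # my preference step
    return avg_quality + bonus

best = max(candidates, key=lambda name: my_score(*candidates[name]))
print(best)  # "Alice": her averaged quality beats Bob's even with his bonus
```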

  • http://gabriel.mihalache.name/econ/ Gabriel M.

    People lie. And most of the time, you have no idea what goes on in their heads, or why they say what they do. Also, by definition, half of them have an IQ below 100.

    We might also have to ponder the scary notion that experience is more than information. “Being there” matters.

  • http://profile.typekey.com/nickbostrom/ Nick Bostrom

    I was on a job panel today interviewing candidates to replace our departing administrator. It was a strong field and at the end of the day we are left with four people who all seem good picks. We seem to agree about the relative strengths of the different candidates – one had more experience, one seemed especially intelligent but we were not sure how long she would stay, another was good all round but would likely require us to go above the intended salary range, etc. The panel members have somewhat different rankings of the four top candidates because we seem to place different weights on different attributes. Herein lies a difficulty: it is not obvious whether or to what extent these different weights simply reflect different *preferences* or instead different *beliefs* about what is the most important attribute needed to do this kind of job well in some absolute sense. It seems that in practice, these factors can be difficult to disentangle.

  • rcriii

    If other people are likely to discount my judgement, then they are less likely to concede a point under disagreement. Isn’t there then some justification for me discounting _their_ opinion?

  • Yan Li

    I wonder whether the “self-preference” bias would disappear if I am lukewarm about a candidate, and my colleague is passionate about the same person. Is it possible that we are more biased toward the intensity of an opinion rather than toward its source?

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    If you accept that you often pretend to give advice to advance the common good, but really give advice to advance your private good, then you can reasonably infer that others may be doing the same, and attribute a lot of disagreements in such situations to differing preferences. But for the sort of situations where you honestly think you are giving advice for a common good, you need a better than average reason to think that others are giving advice to advance their private good.

    Rcriii, their stubbornness only justifies discounting their opinion to the extent that stubbornness is correlated with poor-quality opinions, and that you are in fact less stubborn than they are. They justify their stubbornness in terms of the stubbornness they anticipate seeing from you.

  • http://www.pellucid.org Bob Knaus

    It seems to me that, if I am truly the decision-maker in a situation, I ought to give my own opinion considerably more weight than I do the opinions of others. After all, I am the one who has to live with the consequences of my decision. Weighing others’ opinions at 30% seems plenty to me.

    Now granted, I am the captain of my own sailboat. In my 44 years, I’ve only been employed once, otherwise I have been the owner and the boss. Never of anything grand, but enough that I’m typing this in the Bahamas.

    I would call 70/30 self-confidence, not bias. In moderation it should be a virtue.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Bob, the question is what your having to live with the consequences has to do with your being more accurate. Perhaps you think it makes you try harder to be right.

  • Carl Shulman

    Robin,

    “But for the sort of situations where you honestly think you are giving advice for a common good, you need a better than average reason to think that others are giving advice to advance private reasons.”

    What do you mean here? Introspection can provide you with confidence in your own conscious honesty approaching 1.0. In the absence of good lie detection capabilities, you will then have to discount any interlocutor’s belief by the prior probability that a random peer will be consciously deceptive. It seems that all you need is ‘some reason’ to assign a prior probability of deception greater than 0 to your peers, not a ‘better than average reason.’ (A code sketch of this discounting follows at the end of this comment.)

    The literature on academic and professional misconduct (including the preference falsification work cited above) does appear to provide reason to discount others (although generally not as much as the 70/30 split reported in the post).

    Anonymous surveys indicate that a large proportion of scientists, perhaps one third, consciously engage in academic misconduct.
    http://aapgrandrounds.aappublications.org/cgi/content/extract/14/3/28?ck=nck

    Your own post on image manipulation is pertinent: http://www.overcomingbias.com/2006/12/see_but_dont_be.html

    Business students frequently cheat:
    http://links.jstor.org/sici?sici=0022-0485(199724)28%3A1%3C3%3AUSCITF%3E2.0.CO%3B2-E

    Politicians…need I say more? Regardless:
    http://www.factcheck.org/
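
    Returning to the discounting point above, a minimal sketch of one way to formalize it. This rests on an assumption of mine, not anything in the comment: a consciously deceptive report carries no information, so the deception prior scales down the peer’s otherwise-equal weight.

```python
# A minimal formalization (my assumption, not from the comment above): a
# deceptive report carries no information, so the peer's otherwise-equal
# weight is scaled by (1 - p_deceptive) and the pair is renormalized.

def weights_with_deception_prior(p_deceptive: float) -> tuple[float, float]:
    """Return (self_weight, other_weight) summing to 1."""
    raw_self, raw_other = 1.0, 1.0 - p_deceptive  # equal weights if p = 0
    total = raw_self + raw_other
    return raw_self / total, raw_other / total

# Even the roughly one-third misconduct rate cited in this comment falls
# well short of justifying a 70/30 split:
print(weights_with_deception_prior(1 / 3))  # ~(0.6, 0.4)
```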

  • http://profile.typekey.com/nickbostrom/ Nick Bostrom

    Perhaps the 70/30 weighting reflects how much advice is influenced by differing preferences in typical advice situations? To save us the effort of thinking explicitly about friendly advisors’ preferences, we might have been endowed with this as a simple heuristic?

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Carl, yes we see a lot of dishonest behavior in the world, but introspection is usually not a very good guide to seeing your own honesty. The line between conscious and unconscious dishonesty is nowhere near as clear as we often assume, and a large fraction of those people who do dishonest things think of themselves as basically honest people.

  • Carl Shulman

    Nick,

    The magnitude of the split would seem to be much too large in science (most experiments conducted by scientists who have engaged in misconduct are not fraudulent) and other areas with strong truth-supporting institutions. This should render the bias much more socially costly than it was in the ancient world.

    Robin,

    We can operationally define ‘conscious dishonesty’ as what people can detect themselves and self-report in the anonymous survey evidence, i.e. they can report it to themselves as well as to a pollster. Thinking of oneself as having the character trait of honesty is a very different thing from thinking that one is not consciously lying about a particular fact: the former claim is highly ambiguous and people can define the terms to taste, as ‘good driving’ is defined to mean ‘safe’ or ‘fast’ depending on personal preferences.

  • http://rationallongevity.blogspot.com/ Anne Corwin

    “Because decision makers are privy to their own thoughts, but not to the reasons underlying an advisor’s opinion, they place a higher weight on their own opinion than on an advisor’s.”

    Is it permissible in the hypothetical here to ask the advisor to provide an explanation or rationale for their opinion? In the real world, this would of course be possible. I don’t think that wanting to know someone’s reasoning is evidence of a self-preference bias — however, it *would* be self-preference bias if you only asked for explanations of opinions that did not match yours. Could making a practice of questioning even those opinions that converge with your own be a means of counteracting a tendency to form a bias of this sort?

  • Douglas Knight

    70:30 sounds tiny, both compared to other biases and compared to what I would have expected for this particular bias. I failed to track down the study, but I expect the result is due either to people being on their best behavior during a study or to the study being so artificial that people have to admit to themselves that the other person really is equally qualified.

  • Douglas Knight

    Of course, both my complaints apply to studies of other biases (although some study designs are better at hiding what is being studied), but this bias seems so simple and straightforward that I think it might be more susceptible to being destroyed in a study.

  • http://gtziralis.googlepages.com/ George Tziralis

    So, the 70/30 rule could be a plausible hypothesis when running prediction market simulations. For example, an agent’s predicted probability would be p = 0.7*own_belief + 0.3*market_price. Very interesting…
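
    A minimal sketch of such a simulation, under an assumed crude price-formation rule (the price moves to the mean of the agents’ estimates each round); none of this comes from the comment beyond the 70/30 updating formula:

```python
# Toy simulation of the suggestion above (a sketch, not a calibrated market
# model): each agent anchors 70% on its private belief and 30% on the
# current market price, and the price moves to the mean estimate.

import random

def simulate(beliefs: list[float], price: float = 0.5, rounds: int = 20) -> float:
    """Iterate the 70/30 updating rule until the price settles."""
    for _ in range(rounds):
        estimates = [0.7 * b + 0.3 * price for b in beliefs]
        price = sum(estimates) / len(estimates)  # crude price formation
    return price

random.seed(0)
beliefs = [random.random() for _ in range(100)]  # agents' private beliefs
# Fixed point: p = 0.7*mean(b) + 0.3*p, so the price converges to mean(b).
print(simulate(beliefs))
```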

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Anne, even when you do ask for and hear reasons, you usually do not understand all the details of their reasons.

  • Stuart Armstrong

    Perhaps the 70/30 weighting reflects how much advice is influenced by differing preferences in typical advice situations?

    After all, I am the one who has to live with the consequences of my decision.

    That is a difference in preferences in most advice situations: since I am responsible for my decision, my penalty for error is larger than that of a colleague, who is simply motivated to make the best possible choice (while I am strongly motivated to avoid the worst).

    So there are many circumstances where we should give much more weight to others’ opinions, especially in low-risk/high-reward ventures.

    I noticed that Harvey and Fischer’s paper used a point reward system. If, instead, they were to deduct points from a set total, that could test whether people behave differently when it’s their “responsibility” (using status quo bias in our favour!).