Errors, Lies, and Self-Deception

About a recent European Journal of Personality article:

The participants recorded a one minute television commercial, … then watched … themselves, having been given guidance on non-verbal cues that can reveal how extraverted or introverted a person is. … They were then asked to rate their own personality. … The participants’ extroversion scores on the implicit test showed no association with their subsequent explicit ratings of themselves, and there was no evidence either that they’d used their non-verbal behaviours (such as amount of eye contact with the camera) to inform their self-ratings.

In striking contrast, outside observers who watched the videos made ratings of the participants’ personalities that did correlate with those same participants’ implicit personality scores, and it was clear that it was the participants’ non-verbal behaviours that mediated this correlation … Two further experiments showed that this general pattern of findings held even when participants were given a financial incentive.

[Folks seem] extremely reluctant to revise their self-perceptions, even in the face of powerful objective evidence. … Participants seemed able to use the videos to inform their ratings of their “state” anxiety (their anxiety “in the moment”) even while leaving their scores for their “trait” anxiety unchanged.

(Hat tip to Michael Webster.)  This sort of thing terrifies me.   Let me explain why.

Any long complex design or calculation is subject to errors.  And those who do such things regularly must get into the habit of testing and checking for such errors.  This may take most of the effort, but it is at least manageable, because we expect that such errors are not very correlated with other features of interest.   If something has worked ten times in a row in field tests, it will probably work the first time for a customer, at least if that customer’s environment is not too different from field test environments.

People who have to worry about spies and liars, on the other hand, have to worry more about troublesome correlations.  Liars can coordinate their lies to tell a consistent story.  Spies and liars can choose carefully to betray us exactly when such defections are the hardest to detect and the most expensive.  So the fact that a possible spy performed reliably ten times in a row gives less confidence that he will also perform reliably the next time, if the next time is unusually important.  In these cases we rely more on private info, i.e., what the spy or liar could not plausibly know.   For example, if we do not let the possible spy know which are the important cases, he can’t choose only those cases to betray us.   And if we can check on him at unexpected times, we might catch him in a lie.
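
To make that contrast concrete, here is a minimal sketch in Python, with made-up failure rates and a purely illustrative "stakes" parameter, of why a clean track record is less reassuring when failures can be chosen strategically rather than occurring at random:

```python
import random

random.seed(0)

def honest_component(stakes):
    # Fails independently at random, no matter what is at stake.
    return random.random() > 0.05  # ~5% error rate, an arbitrary choice

def strategic_agent(stakes, temptation=0.9):
    # Performs reliably on routine trials, but defects exactly when
    # the stakes make betrayal worth the risk.
    return stakes < temptation

def field_tests(worker, n=10):
    # Ten routine, low-stakes trials -- the "field tests" above.
    return sum(worker(stakes=random.random() * 0.5) for _ in range(n))

print("honest component, routine passes out of 10:", field_tests(honest_component))
print("strategic agent, routine passes out of 10: ", field_tests(strategic_agent))

# Both track records look similar, but only one generalizes
# to the single trial that actually matters.
print("honest component on the crucial high-stakes trial:", honest_component(stakes=0.95))
print("strategic agent on the crucial high-stakes trial: ", strategic_agent(stakes=0.95))
```

Keeping the stakes of each trial private to us is what breaks the strategic agent's plan, which is the point of relying on info the spy could not plausibly know.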

We humans have many conscious beliefs, and we are built to have accurate ones in many situations, but in many other situations we are built to have misleading conscious beliefs, i.e., to be self-deceived. Evolution judged that such misleading beliefs would tend to help us fool our colleagues, and so better survive and reproduce. It created subconscious mental processes to decide when our beliefs should be accurate and when misleading.

We seem almost completely defenseless against such manipulation.  Yes we can try to check our conscious beliefs against outside standards, but our subconscious liars can not only choose carefully when to lie about what, but they probably also have access to all our conscious thoughts and info!  They might even lie to us about whether we checked our beliefs, and what those checks found.  So in principle our unconscious liars can execute extremely complex and subtle lying plans.  For example, the study above suggests that such processes choose to make us blind to clues about our average public speaking anxiety, while letting us see momentary fluctuations about that average.

If our subconscious liars were as smart and thoughtful as our conscious minds, we would seem to be completely at their mercy.  The situation may not be that bad, but it is not clear how we can tell just how bad the situation is; even if they had complete control, they would probably want us to think otherwise.

This is the context in which I find myself interested in “minimal rationality,” similar to minimal morality.  In the limit of my being subject to very powerful subconscious liars, how can I best avoid their distortions?  It seems I should then become especially distrustful of intuition, and especially interested in trustworthy processes outside myself, such as prediction markets and formal analysis.

If I have a choice between two ways to make an estimate, and one of them allows more discretion by subconscious mental processes, I should go with the other if possible. If the data is pretty clear and the theory needs a lot of judgment calls to get an answer, I go with the data. If the data is messy and needs judgment calls while standard theory gives a pretty clear answer, I go with that theory.
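
Stated as a rule of thumb, the idea might look like the following toy sketch, where the "judgment call" counts are of course themselves judgment calls:

```python
def pick_estimate(candidates):
    """candidates: a list of (estimate, judgment_calls) pairs, where
    judgment_calls counts the openings for subconscious discretion
    in how that estimate was produced."""
    # Prefer the estimate whose derivation left the least discretion.
    return min(candidates, key=lambda c: c[1])[0]

# Clear data (one judgment call) vs. a theory needing many calls:
print(pick_estimate([(0.7, 1), (0.4, 5)]))  # -> 0.7, so go with the data
```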

Of course this minimal rationality approach makes me subject to my subconscious lying about which estimates allow more subconscious discretion. So I need to be especially careful about those judgments. But what else can I do?

Many folks figure that if evolution planned for them to believe a lie, they might as well believe a lie; that probably helps them achieve their goals. But I want, first and foremost, to believe the truth.

  • Hal Finney

    I agree that this is a really tough problem, and that the first step of disbelieving your own reasoning is the hardest. (People love to point out that if you distrust your reasoning, how much trust can you put in your conclusion that you are untrustworthy? This area is full of amusing paradoxes.)

    It would be great if we had knowably trustworthy institutions out there, but we live in the real world. The big question is whether imperfect institutions are going to do a better job on a given issue than my own imperfect reasoning. Kind of reminiscent of the debate about whether market failures or government failures are worse.

    One important factor IMO is that many issues that we fret over are largely irrelevant to our lives. The ones that matter tend to be personal and local. No Idea Futures market is going to tell you if you should break up with your girlfriend. It might tell you whether a given government health plan will save money, but virtually nobody should care about that.

    Probably the best source of unbiased advice on personal matters, as hinted in the article, is simply to ask people and take what they say to heart. Of course, this is easy to say but if the effect in the article is strong enough, it’s going to be hard to implement.

    I know Robin has at least occasionally put this into effect; from time to time he has solicited opinions on where he should be putting his professional efforts. That would be a good model to follow.

  • http://michaelkenny.blogspot.com Mike Kenny

    Why would you look at external cues to tell you how extraverted you are when you have a lot of experience with yourself? Why are outsiders the ones who get to say how extraverted you are? Why are external cues valued over a person's assessment of his own preferences? Isn't extraversion determined by self-reports of personality anyway?

    Why is the truth so important? Thought experiment: if god came down to you and said ‘I’ll tell you the whole truth, but you’ll suffer in hell for eternity, or you can know nothing but falsity, but live in heaven for eternity,’ wouldn’t you choose falsity? Doesn’t this suggest the feeling of happiness is more important than truth ultimately? It seems to me truth is a means to an end and has a cost.

  • Anonymous

    Your example isn’t too useful.

    Yes, of course in that thought experiment, I’d choose heaven/falsity. That does indeed suggest that in the limit of infinite suffering versus infinite happiness, that choice is more important than truth. That does not mean that this comparison holds once you bring it down to human levels – find out a little bit extra at the cost of… what? You haven’t shown that it has a cost or that, if it does have a cost, that this cost is at all significant.

  • HH

    A possible technique here is to apply a version of the Life/Dinner principle.

    I learned about the life/dinner principle from Richard Dawkins, though he cites others: the reason the rabbit is faster than the fox is that the fox is running for his dinner, but the rabbit is running for his life. That is, the rabbit is far more motivated to be good at what he does – in fact, he’s descended from other rabbits who evaded foxes long enough to reproduce. [The fox may well have descended from a line that never once caught a rabbit.]

    Similarly, for most of us, overcoming bias and knowing the truth is dinner: we want it, but we don’t have to have it and can live just fine without it. In any given situation, however, we can find someone for whom a particular truth or fact is life, or close to it. Identifying that person and substituting their judgment for ours can go a long way to avoiding self-deception.

    For example, the argument against government control of the economy is that planners lack the information and the incentives to do it right – a functioning economy for them is dinner. However, for firms in the market, obtaining information and responding to incentives is life. As a result, prices and production are best left to them.

    Of course, self-deception can simply enter in the selection of the proxy: if we’re adopting someone else’s judgment, our self-deception can simply make us pick the wrong person’s judgment. But the problem of self-deception is infinitely recursive, and can interfere whenever we make a choice, so all you can do is try to make the right selection of a proxy.

  • diogenes

    Although overcoming bias is an important goal, the cost shouldn't be the quality of the estimate or result. That is, the process you use to make decisions (no intuition) could significantly WORSEN your actual decisions.

    Intuitions about ourselves have a strong pull toward self-deception, but intuitions about arithmetic (ballparking), spatial location (where did I park the car?), or taste (food) leave much less room for deception. The point of one of Gladwell's poorer books, Blink, was just this: sometimes intuition performs SIGNIFICANTLY better than consciously thinking something through.

    If you have several mutually exclusive circles of friends, it's not too hard to get feedback on your own personality that could be partially correct if told to you repeatedly.

  • CannibalSmith

    1. Subjects rated their own extroversion. Other people rated subjects’ extroversion.
    2. Subjects were taught conscious control of body language.
    3. Subjects recorded commercials.
    4. Subjects and other people watched the recordings.
    5. Subjects rated their in-recording extroversion. Other people rated subjects’ in-recording extroversion.
    6. Subjects didn't report increased extroversion between step 1 and step 5; other people did.
    Did I get the article right?

  • http://hanson.gmu.edu Robin Hanson

    Hal, I wouldn’t rule out idea futures on breaking up with your girlfriend, but yes asking associates is often a good way to find things out, at least if they will be honest with you.

    HH, strong incentives can be good but that doesn’t mean they are incentives to be honest.

    • HH

      Agreed, I was implicitly conflating the two. I’m thinking of things like prediction markets, where real money is at stake, such that people have both a strong incentive to be right and a strong incentive to be honest.

  • http://transhumangoodness.blogspot.com/ Roko

    The subjects were just average idiots, with average intelligence and no rationality training. I would only be scared if such results could be reproduced on those of above average intelligence.

    • http://transhumangoodness.blogspot.com/ Roko

      It seems that intelligence is no defence against many biases.

      I am suitably more scared.

  • Joe Teicher

    Since they were undergraduates, they are probably at least a little above average in intelligence; however, they are definitely below average in age and life experience. To me it seems likely that people get a lot more accurate in their beliefs about themselves as they get older. Isn't it possible that an undergraduate hasn't been exposed to the variety of social experiences, with accurate feedback, that is necessary to get an accurate view of their own extraversion?

    I just don’t buy this practice of taking the psychological quirks of people who are barely out of childhood as representative of the human race in general, and I won’t buy into any of these results until they are reproduced on a much wider variety of people.

    • Jonnan

      Odd thought: what determines which people do, and which don't, get better at seeing through self-deception?

      I’ve noticed that there *are* many people that get better at not deceiving themselves, and continue that way. There are *also* people that get better at self-deception.

  • Carl Shulman

    Roko,

    IQ, at least, doesn’t strongly predict less bias. See Keith Stanovich.

    • http://transhumangoodness.blogspot.com/ Roko

      thanks.

      Link

      • http://transhumangoodness.blogspot.com/ Roko

        In 7 different studies, the authors observed that a large number of thinking biases are uncorrelated with cognitive ability. These thinking biases include some of the most classic and well-studied biases in the heuristics and biases literature, including the conjunction effect, framing effects, anchoring effects, outcome bias, base-rate neglect, “less is more” effects, affect biases, omission bias, myside bias, sunk-cost effect, and certainty effects that violate the axioms of expected utility theory. In a further experiment, the authors nonetheless showed that cognitive ability does correlate with the tendency to avoid some rational thinking biases, specifically the tendency to display denominator neglect, probability matching rather than maximizing, belief bias, and matching bias on the 4-card selection task. The authors present a framework for predicting when cognitive ability will and will not correlate with a rational thinking tendency.

  • Benquo

    A very helpful post — it clarified a lot of what you’ve said or implied elsewhere.

  • http://www.cawtech.freeserve.co.uk Alan Crowe

    I don’t think you can dodge the perils of biased judgment by refusing to use your judgment.

    I like to play Go and have noticed my personality interfering with my play. Early in the game it is important to boldly take the big points, dealing with attacks on one's stones with counter-attacks, not defensive replies. I am a timid and anxious person, so I often make slack moves that defend a group that is already strong enough to be left alone. Worse, if I attempt directly to be less timid I play rash moves rather than good ones.

    Sometimes I become vividly aware of what is happening. I recognise the situation as similar to a professional game that I have studied or as similar to that discussed in an instructional book. I know what move I’m supposed to play. My fears about a group I am anxious about crowd in on me. Will it be attacked and killed? It is an emotionally fascinating experience when I pluck up my courage, play the correct, scary move, and watch as my opponent makes a poor reply and gets into difficulty.

    Now wait a minute. Why do I bother with all this inner angst, facing down my fear, plucking up my courage, trying to exercise my judgment in a way that is bold without being rash? Go is a logical game. Why not just think about my moves?

    The problem lies in the difference between strategy and tactics. One can certainly read out sequences: if I do this and he does that etc. It is important to do so. On the other hand the board is large and the options numerous. Strategy depends on judgment. These stones are light meaning that attacks can be dodged. These stones are thick meaning that if my opponent plays close to them I can attack him and if I play close to them I will be overconcentrated.

    When I try to think logically, instead of exercising my judgment, I am trying to dodge the problem of my timid nature leading to timid judgment, but what actually happens is that I read out sequences and let tactics dictate strategy. This usually ends badly.

    I draw a moral for real life. One is more likely to get crisp clear answers for tactical questions than for strategic ones. If your judgment has biases and you try to escape the bad consequences of those biases by favouring approaches that lead to clear answers that do not require the exercise of judgment, you create a new bias, towards the tactical tail wagging the strategic dog. Welcome to a new set of bad consequences.

  • http://bizop.ca/blog/how_would_you_play_that/ michael webster

    I thought that this was an important study, and relevant for those who study signalling. (Which is why I sent it to Robin, in the first place.)

    I have a different take on experiments, even though I find Robin’s observations very interesting.

    Signal theorists, it appears to me, assume that if I want to signal my type, X, through message Y, to a receiver, then I must have private access to my type.

    This experiment suggests otherwise: the way I learn my type is to observe what others think my type is. For certain types, there is no correct introspection available to me.

    It would be like playing poker with your hand constantly changing based upon what other people thought you had, as revealed by or inferred from their bets.

    (As a philosopher, I am generally attracted to paradoxes so I may have read into the experiment this very oddity. But, I wonder if others saw the same thing?)

  • Pingback: Interessantes woanders (2009.06.17) › Immersion I/O

  • A different Diogenes

    I think the findings are based on a false premise:

    [Folks seem] extremely reluctant to revise their self-perceptions, even in the face of powerful objective evidence

    An analysis of a one-minute video of yourself using non-verbal cues would not be considered by the subjects as "powerful objective evidence", even if it is. Therefore they will ignore it. Their error, if it is indeed an error, is in thinking that the non-verbal cue evidence is not a particularly useful measure of their level of extroversion, which is hardly a surprising conclusion for them to draw.

  • http://hanson.gmu.edu Robin Hanson

    Roko, How do you decide which psychology results to believe?

    Alan, yes intuitions on the right topic can beat explicit thoughts on the wrong topic.

    Michael, people might be unconsciously aware of things they are consciously unaware of.

    • http://transhumangoodness.blogspot.com/ Roko

      > Roko, How do you decide which psychology results to believe?

      It is clear that I am a more effective thinker than most people, so if a study is performed on a fairly average group, and they are shown to be lacking, I would be foolish to think that the results automatically carry over to me.

      However, Carl has shown me evidence that suggests that many biases afflict the intelligent as badly as the stupid. So perhaps I need to change strategy a touch.

      • http://circuitbreak.blogspot.com Nonzero

        Sometimes just asking yourself whether a belief is true or not helps. You must ask in a genuinely curious way, the way you'd ask a favorite professor whom you respect and expect to occasionally surprise you. If the answer comes out in less than a second, you can be sure it is the deceptive, robotic part of your mind talking.

        Your feelings have a sphere of usefulness in decision making, or else they wouldn't be there. After you've verbalized the belief to be questioned, test how it feels to fully believe it versus how it feels to be convinced that the opposite is true. This will not determine the objective truth of the statement, but it will certainly give you insight into which choice will be easier to live with.

    • http://bizop.ca/blog/how_would_you_play_that/ michael webster

      Robin, if you are right about this, it still demands a change to signal theory for some signals, as the theory assumes the signaller's type is known to the signaller.

    • http://www.cawtech.freeserve.co.uk Alan Crowe

      You are trying to decide between A and B. A is correct, but your subconscious is trying to get you to choose B. You try using your judgment and choose B. Wait! Intuition is untrustworthy. You should use a trustworthy process outside yourself such as formal analysis.

      You count up and find that you have a dozen processes outside yourself that you might rely on. God knows that 4 of those merit trust and indicate A. God also knows that 8 of them are "on the wrong topic" or otherwise untrustworthy. What they indicate is a toss-up: 4 for A, 4 for B. Lacking omniscience, Hanson has to use his judgment.

      You use your judgment to pick one of these twelve processes. If it indicates A your subconscious undermines your confidence in its relevance. Realising that it is on the wrong topic, you switch to a different process. Rinse and repeat.

      Eventually you have an external process that tells you to believe B. Your subconscious has lied to you, getting you to judge that you have picked the most trustworthy and relevant one of the twelve.

      When I sit at the Go board I don’t think to myself “Intuition is untrustworthy, I’ll spend my time in the opening reading out end game sequences and then make a crap move.” What is really going on is that my subconscious wants a timid defensive move and it knows that getting me to read out tactical sequences is a good way to get one. So it whispers in my ear about how untrustworthy intuition is. Use “minimal rationality”. Formal analysis is always trustworthy. Then I read out some sequences, under the banner of formal analysis, find a threat to a group and play a timid move to defend it. I play my timid move smugly. My subconscious has lied to me, telling me that my explicit thoughts were on the right topic.

  • Grant

    But I want, first and foremost, to believe the truth.

    I’m not trying to insult Robin or disagree with this post, but this seems like a lie to me. This is exactly the sort of thing we’d expect academics’ internal liars to make them believe: that they selflessly serve the public good, while in fact being biased towards their own theories and accomplishments. Why should we believe that Overcoming Bias is really about overcoming bias?

    Given the nature of evolution, expecting life (or its creations) to be truly selfless in the pursuit of truth seems impossible. Creating institutions which incentivize honesty (prediction markets, etc.) seems like a more realistic approach.

    • http://transhumangoodness.blogspot.com/ Roko

      Robin is, as an academic, incentivized to reliably get to the truth; his status depends upon it. This is especially true in the long term.

      • Grant

        I certainly won’t disagree that some status-seeking in academia produces truth which is very valuable. But I don’t think all or even a majority of the status-seeking is also truth-seeking. The social sciences seem to have especially weak feedback mechanisms associating status with truth, even in the long term. For example, people are still hotly divided over Keynes’ GT, but not Einstein’s relativity. We could say this is because economics is much harder to test than physics, but testing theories is the feedback which keeps academics honest.

        I don’t know what Robin is incentivized to do.

  • Pingback: Linkpile

  • Taemojitsu

    There are an infinity of universes that can be created with lies.

    There is only one universe that can be created with truth; however, it is also impossible to know this universe in a social sense because it depends on people’s perceptions of value, and often people do not control how they think of things or even do not allow their value judgement to be known outside their self. So, again from a social standpoint, the truth only makes things simpler when people have themselves decided to be simple.

    This desire for simplicity, and for a reality that is not dependent on random perceptions and judgements as a result of heuristics that are often deliberately (but innocently) manipulated by individuals or organizations in their environment, is one reason to seek truth. Another reason is as a source of moral guidance, in that the overall best strategy in this world is cooperation and it is only a matter of revealing that strategy through attempts to obscure it.

    However, the problem with this approach is that when you learn too much, you discover how much deception pervades the world, and it seems impossible to prosper, or even survive, when you consider that many possible choices are conflicts that can end in only one of two ways: either you suffer, someone else benefits, and the world as a whole becomes worse; or you benefit, someone else suffers, and, whether or not the world benefits, you have already done a selfish act and as a result cannot truthfully see yourself as contributing to the world.

    Are you truly being honest with yourself? You may not want to ask yourself who else may be losing out if you are not. Given the limitations of our memory and our ability to compare situations and predict the future, it seems inevitable that there will always be some dishonesty. Does it not?

  • Pingback: Overcoming Bias : The Best Big Lies?

  • Pingback: Extrapolating – Ajay Jetti