Errors, Lies, and Self-Deception
About a recent European Journal of Personality article:
The participants recorded a one minute television commercial, … then watched … themselves, having been given guidance on non-verbal cues that can reveal how extraverted or introverted a person is. … They were then asked to rate their own personality. … The participants’ extroversion scores on the implicit test showed no association with their subsequent explicit ratings of themselves, and there was no evidence either that they’d used their non-verbal behaviours (such as amount of eye contact with the camera) to inform their self-ratings.
In striking contrast, outside observers who watched the videos made ratings of the participants’ personalities that did correlate with those same participants’ implicit personality scores, and it was clear that it was the participants’ non-verbal behaviours that mediated this correlation … Two further experiments showed that this general pattern of findings held even when participants were given a financial incentive.
[Folks seem] extremely reluctant to revise their self-perceptions, even in the face of powerful objective evidence. … Participants seemed able to use the videos to inform their ratings of their “state” anxiety (their anxiety “in the moment”) even while leaving their scores for their “trait” anxiety unchanged.
(Hat tip to Michael Webster.) This sort of thing terrifies me. Let me explain why.
Any long complex design or calculation is subject to errors. And those who do such things regularly must get into the habit of testing and checking for such errors. This may take most of the effort, but it is at least manageable, because we expect that such errors are not very correlated with other features of interest. If something has worked ten times in a row in field tests, it will probably work the first time for a customer, at least if that customer’s environment is not too different from field test environments.
People who have to worry about spies and liars, on the other hand, have to worry more about troublesome correlations. Liars can coordinate their lies to tell a consistent story. Spies and liars can choose carefully to betray us exactly when such defections are the hardest to detect and the most expensive. So the fact that a possible spy performed reliably ten times in a row gives less confidence that he will also perform reliably the next time, if the next time is unusually important. In these cases we rely more on private info, i.e., what the spy or liar could not plausibly know. For example, if we do not let the possible spy know which are the important cases, he can’t choose only those cases to betray us. And if we can check on him at unexpected times, we might catch him in a lie.
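The contrast between uncorrelated errors and strategically correlated defections can be sketched in a toy simulation. Everything here — the failure rate, the function names, the stakes flag — is an illustrative assumption of mine, not something from the studies above:

```python
import random

random.seed(0)

def independent_device(p_fail=0.05):
    """A device whose failures are independent of the stakes."""
    def run(high_stakes):
        return random.random() > p_fail  # True = performed reliably
    return run

def strategic_spy():
    """A spy who performs reliably on routine cases and defects
    exactly when the stakes are high."""
    def run(high_stakes):
        return not high_stakes
    return run

def track_record(agent, n=10):
    """Ten routine, low-stakes trials: both agents can look equally reliable."""
    return sum(agent(high_stakes=False) for _ in range(n))

device = independent_device()
spy = strategic_spy()

print(track_record(device), track_record(spy))   # both at or near 10
print(device(high_stakes=True))                  # usually still reliable
print(spy(high_stakes=True))                     # always defects
```

The point of the sketch: a perfect ten-for-ten record screens out the independent-error model but tells you nothing about the strategic one, which is why the post turns to private info and unexpected checks rather than track records.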
We humans have many conscious beliefs, and we are built to have accurate ones in many situations, but in many other situations we are built to have misleading conscious beliefs, i.e., to be self-deceived. Evolution judged that such misleading beliefs would tend to help us fool our colleagues, and so better survive and reproduce. It created subconscious mental processes to decide when our beliefs should be accurate and when misleading.
We seem almost completely defenseless against such manipulation. Yes, we can try to check our conscious beliefs against outside standards, but our subconscious liars can not only choose carefully when to lie about what; they probably also have access to all our conscious thoughts and info! They might even lie to us about whether we checked our beliefs, and what those checks found. So in principle our subconscious liars can execute extremely complex and subtle lying plans. For example, the study above suggests that such processes choose to make us blind to clues about our average public speaking anxiety, while letting us see momentary fluctuations about that average.
If our subconscious liars were as smart and thoughtful as our conscious minds, we would seem to be completely at their mercy. The situation may not be that bad, but it is not clear how we can tell just how bad the situation is; even if they had complete control, they would probably want us to think otherwise.
This is the context in which I find myself interested in “minimal rationality,” similar to minimal morality. In the limit of my being subject to very powerful subconscious liars, how can I best avoid their distortions? It seems I should then become especially distrustful of intuition, and especially interested in trustworthy processes outside myself, such as prediction markets and formal analysis.
If I have a choice between two ways to make an estimate, and one of them allows more discretion by subconscious mental processes, I should go with the other when possible. If the data is pretty clear while theory needs a lot of judgment calls to get an answer, I go with the data. If the data is messy and needs judgment calls while standard theory gives a pretty clear answer, I go with that theory.
Of course this minimal rationality approach makes me subject to my subconscious lying about which estimates allow more subconscious discretion. So I need to be especially careful about those judgments. But what else can I do?
Many folks figure that if evolution planned for them to believe a lie, they might as well believe a lie; that probably helps them achieve their goals. But I want, first and foremost, to believe the truth.