Gullibility and Control

Science Friday highlighted research suggesting we jump more to conclusions when we feel out of control:

In situations in which a person is not in control, they’re more likely to spot patterns where none exist, see illusions, and believe in conspiracy theories. In a series of experiments, researchers created situations in which people had less control over their situation, and then tested how likely the participants were to see imaginary images embedded in snowy pictures. The researchers also had participants write about either a situation in which they had control, or a situation in which they didn’t, and then presented stories involving strange coincidences. People who had written about a situation in which they were not in control were more likely to draw non-existent connections between the coincidences, the researchers found.

This summary suggests out-of-control-feeling folks are biased to see more than there is, but perhaps in-control-feeling folks are biased to see less than there is.   

  • Stuart Armstrong

    I’ve observed that myself (in me and in others); there just remains the thorny question of why.

    Is it that in-control people have much to gain by sticking to reasonable methods that have always worked for them, but out-of-control people hold out for the chance of spotting a completely new pattern, and a new way of doing things? Or is it just the cognitive load of being out of control that means they don’t have the time to analyse more properly?

  • Court

    Forward this to all the doomers you know.

    Robin, is the original study up anywhere?

  • gwern

    Stuart: Don’t gamblers who are behind start taking irrationally large and risky bets? I feel like there is an analogy (or perhaps the same mechanism even) to be drawn between this and that.

  • William B

    I have a serious question:

    Studies like this, which seek to create a state of mind by asking subjects to remember some experience they had, always seem silly to me. How can I convince myself that this extremely common technique is actually valid? Alternatively, is there any support for my position other than my gut feeling that you can’t use an artificial recreation of a genuine feeling as evidence?

    Does this bother anyone else?

  • Benja Fallenstein

    I agree with Stuart that “why” is the thorny question. To Stuart’s suggestions, I’d like to add that it seems like out-of-control folks have a need for appearing to be in control, which may be helped by spotting and announcing apparent patterns.

  • Robin Hanson

    So why does everyone assume the out-of-control style is the less rational one?

  • Benja Fallenstein

    Robin, speaking for myself, because when I see patterns in out-of-control style situations, it seems to me I more often than not conclude later that they weren’t really there. The appearance may be deceiving, but it seems like enough for a preliminary conclusion…

    Have you had the opposite experience (“out-of-control” hunches turning out to be better than “in-control” ones)?

  • michael webster

    There is an interesting suggestion that in truly one-shot games or decisions, magical thinking is not irrational if it preserves the illusion of controlling an otherwise uncontrollable and dangerous situation.

    The suggestion was made in an attempt to explain the prevalence of water witching.

    More can be read here:

  • James Andrix

    So why does everyone assume the out-of-control style is the less rational one?

    First, you did title it “Guilibility and Control”, as opposed to “Obliviousness to Subtle Patterns and Control”.

    Second, failing to see an image that is there is a different kind of failure than seeing an image in random data.
    If you don’t see something, you’ll keep doing what you were doing, presumably as rationally as humans ever are with imperfect knowledge. If you see something in random data, it might guide you to make some random decision. Being ignorant is different from being wrong.

    But it’s also a matter of confidence, which I don’t think this study covered. I see Saturn in the left image and it seems clear to me that it was intentionally drawn. I intuitively sense that the odds of an image that Saturn-like arising from random data are very small. (I’ll ignore the possibility that it was randomly generated, but selected for by appearance.)
    If someone is in a state of mind that they will see something in almost any random image, but they don’t know it, and it seems clear to them that the car they see in an image was intentionally drawn, and they think that the odds of something that car-like are very small, then they are being irrational.
    But if they just say they think it looks like a car, they are not being irrational.

    Likewise, if someone is in a state of mind that makes them fail to see Saturn, and they are sure that the image has no pattern and was not intentionally drawn, then they are being irrational.

    The primate who assumes there is no tiger because he does not see one, fails. As does the primate who sees tigers everywhere a tiger might be.

    The more rational primates might miss hidden tigers or startle at shadows depending on their state of mind, but they are careful in the former case and get a hold of themselves in the latter.

  • Zubon

    Robin, was the title meant to be “Gullibility and Control”? Perhaps you meant a clever turn on “guile.” I would suggest that your framing has encouraged people to see the out-of-control view as less rational, but the first sentence does the job better than a suggestive title: “they’re more likely to spot patterns where none exist, see illusions, and believe in conspiracy theories.” A state that leads to jumping at phantasms is a less rational state.

    I see examples of this frequently in online games, where players are at the mercy of developers and a potentially buggy codebase. Players are convinced that certain rituals improve their odds, that something must have been changed because their luck is poor today, etc. A City of Heroes developer used the signature, “Accuracy has not been nerfed,” so that it appeared on every post he made. The question still came up. Some people have trouble with the idea that, given 10 million World of Warcraft players, one-in-a-million events will happen all the time. Of course, it does not help that occasionally the conspiracy theories turn out to be right once the code is examined: some Asheron’s Call players really were monster magnets, and A Tale in the Desert wine flavors really were based on how many grapes had been crushed in a barrel.
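    [The rare-events-at-scale point above is simple arithmetic; a minimal sketch, taking the comment’s 10-million-player figure and an illustrative one-in-a-million event, with the rest standard probability:]

```python
# With 10 million independent players, a "one in a million" event
# is all but guaranteed to happen to somebody, somewhere, every day.
players = 10_000_000
p_event = 1e-6  # illustrative one-in-a-million event, per player

# Expected number of players who experience the event:
expected = players * p_event  # 10.0

# Probability that at least one player experiences it:
p_at_least_one = 1 - (1 - p_event) ** players

print(expected)        # 10.0
print(p_at_least_one)  # ~0.99995 -- near certainty
```

    [So a player who sees the event has witnessed something astonishing personally, while the population-level observer should be astonished only if it never happens.]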

  • Russell Wallace

    When I’m playing poker for chips, if I’m losing, I’ll start making wild bets. This is rational even though those bets will probably still lose, because it improves my overall chances of winning, and I only care whether I win or lose, not how much I lose by.

    If playing for cash, I’d refrain from throwing good cash after bad.

    Is one of these a good analogy? If so, which is the better one?
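    [The chips-vs-cash intuition above can be checked with a toy simulation. This is a sketch under assumed numbers, not the original game: even-money bets that win 48% of the time, a starting stack of 20 chips, and an arbitrary target of 100. When only reaching the target matters, bold stakes beat timid ones, even though every individual bet has negative expected value:]

```python
import random

def reach_target(bankroll, target, p_win, bold):
    """Play even-money bets that win with probability p_win until
    broke or at the target. Bold play stakes as much as is useful;
    timid play stakes 1 chip at a time."""
    while 0 < bankroll < target:
        stake = min(bankroll, target - bankroll) if bold else 1
        if random.random() < p_win:
            bankroll += stake
        else:
            bankroll -= stake
    return bankroll >= target

def estimate(bold, trials=5000, p_win=0.48):
    random.seed(0)  # fixed seed so the estimate is reproducible
    wins = sum(reach_target(20, 100, p_win, bold) for _ in range(trials))
    return wins / trials

bold_rate = estimate(bold=True)    # roughly 0.18
timid_rate = estimate(bold=False)  # well under 0.01
```

    [This matches the classical bold-play result for subfair games: with a fixed win-or-lose goal, larger stakes give the house edge fewer chances to grind you down, which fits the chips case; when losses themselves cost you, as in the cash case, the calculus changes.]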

  • tim

    You can observe the phenomenon at work in instances of extreme shock. For example, take the attacks of 9/11, which spawned thousands of implausible or even absurd theories. The Bush administration used the public sense of vulnerability and lack of control to propose a link between 9/11 and Saddam Hussein. Conspiracy theorists, on the other hand, link the attacks to the government or Israel or the Illuminati. For terrified, nervous, suspicious people, seeing connections where there are none lets them regain some security. With Al Qaeda as the enemy, the public lacked control over their self-defence; the foe was intangible and shadowy. Believing Saddam was behind the attacks meant it was possible to wage a conventional war against terror, with boots on the ground and bombs in the air, a reassuring notion that was easy for most of the American people to accept. The “truthers’” connections serve the same purpose; they can’t fight Al Qaeda (so they say Al Qaeda doesn’t exist), but they believe they can bring down the government if they reach enough people.

    I think it really comes down to the fact that more data about the situation usually equals more control over the situation. If you don’t have data, and don’t have any way of procuring data, you’ll invent it. It’s no different than people pretending to be knowledgeable about a topic under discussion rather than admitting they’re unfamiliar and bowing out. In my experience, people are likelier to pretend they’ve seen a certain movie or listened to a certain musician than to admit ignorance in a situation where lacking data might render them vulnerable (to criticism, mockery, &c). Telling yourself, “I don’t know anything about Al Qaeda or whether they can or will attack again” is a much more difficult admission than “The enemy is Saddam Hussein, and we’re going to invade Iraq and put a stop to this.”

  • Alan

    One of the study’s authors remarked to the effect that an absence of control gives rise to a “visceral” need for order, real or imagined. Any time the term “visceral” is invoked, we are likely talking about the operation of more primitive brain structures. If the seat of rationality is the prefrontal cortex, then visceral reactions are, by definition, not rational. That does not mean that visceral reactions are not adaptive. Of course they are adaptive. We know that most brain activity takes place outside conscious awareness. In the face of uncertainty, perhaps the brain is fishing for a recognizable causal or agency pattern by which to order consciousness.

  • Pierre-André Noël

    So why does everyone assume the out-of-control style is the less rational one?

    There are a lot of examples in science where in-control people fail to see (or to consider further) patterns that do not agree with what they expected to see. It takes a crisis, the knowledge that you are not in control, to start considering that energy might not exactly be conserved or that space-time might not be Euclidean.

    Is it less rational to assume a measurement error instead of rejecting every previously established law of physics? No. However, when the same “error” is reproducible and cannot be explained in the current state of knowledge, you should start to feel less in control…

  • denis bider

    When you are in control, you want a working model of reality that helps you keep control and direct it properly. So you focus on thoughts that seem like they would gradually improve your model, and avoid thoughts that would require you to doubt your model or throw it away. This works well when your model is fundamentally correct, but results in failure when it isn’t.

    When you are not in control, you don’t care about any particular model, because you don’t need one; you aren’t in a position to make decisions. You are therefore free to explore all possibilities. In addition, random exploration could lead to a different model that is fundamentally better than the one currently dominant; if you discover such a better model, it could help you gain control.

    It is in the interest of people in control to defend their model of reality, as without it they don’t have control. Meanwhile, it is in the interest of people who lack control to seek a fundamentally better model than the current best, which requires a more widely cast net of observation.

  • Mikko

    Perhaps we jump to conclusions on both occasions, but rationalize better when we feel we are in control. We would use the extra time to weave a solid narrative around the conclusion that can then be used to convince others. We deceive ourselves about this process to better deceive others.

  • stylized.fact

    More, and consistent, findings on power and executive control: