The Need To Believe

When a man loves a woman, …. if she is bad, he can’t see it. She can do no wrong. Turn his back on his best friend, if he puts her down. (Lyrics to “When a Man Loves A Woman”)

Kristeva analyzes our “incredible need to believe”–the inexorable push toward faith that … lies at the heart of the psyche and the history of society. … Human beings are formed by their need to believe, beginning with our first attempts at speech and following through to our adolescent search for identity and meaning. (more)

This “to believe” … is that of Montaigne … when he writes, “For Christians, recounting something incredible is an occasion for belief”; or the “to believe” of Pascal: “The mind naturally believes and the will naturally loves; so that if lacking true objects, they must attach themselves to false ones.” (more)

We often shake our heads at the gullibility of others. We hear a preacher’s sermon, a politician’s speech, a salesperson’s pitch, or a flatterer’s sweet talk, and we think:

Why do they fall for that? Can’t they see this advocate’s obvious vested interest, and transparent use of standard unfair rhetorical tricks? I must be more perceptive, thoughtful, rational, and reality-based than they. Guess that justifies my disagreeing with them.

Problem is, like the classic man who loves a woman, we find it hard to see flaws in what we love. That is, it is easier to see flaws when we aren’t attached. When we “buy” we more easily see the flaws in the products we reject, and when we “sell” we can often ignore criticisms by those who don’t buy.

Why? Because we have near and far reasons to like things. And while we might actually choose for near reasons, we want to believe that we choose for far reasons. We have a deep hunger to love some things, and to believe that we love them for the ideal reasons we most respect for loving things. This applies not only to other people, but also to politicians, writers, actors, and ideas.

For the options we reject, however, we can see more easily the near reasons that might induce others to choose them. We can see pandering and flimsy excuses that wouldn’t stand up to scrutiny. We can see forced smiles, implausible flattery, slavishly following fashion, and unthinking confirmation bias. We can see politicians who hold ambiguous positions on purpose.

Because of all this, we are most vulnerable to not seeing the construction of, and the low motives behind, the stuff we most love. This can be functional, in that we can gain from seeming to honestly, sincerely, and deeply love some things. This can make others whom we love, or who love the same things, feel more bonded to us. But it also means we mistake why we love things. For example, academics are usually less interesting or insightful when researching topics where they feel the strongest; they do better on topics of only moderate interest to them.

This also explains why sellers tend to ignore critiques of their products as not idealistic enough. They know that if they can just get good enough on base features, we’ll suddenly forget our idealism critiques. For example, a movie maker can ignore criticisms that her movie is trite, unrealistic, and without social commentary. She knows that if she can make the actors pretty enough, or the action engaging enough, we may love the movie enough to tell ourselves it is realistic, or has important social commentary. Similarly, most actors don’t really need to learn how to express deep or realistic emotions. They know that if they can make their skin smooth enough, or their figure toned enough, we may want to believe their smile is sincere and their feelings deep.

Same for us academics. We can ignore critiques of our research not having important implications. We know that if we can include impressive enough techniques, clever enough data, and describe it all with a pompous enough tone, our audiences may be impressed enough to tell themselves that our trivial extensions of previous ideas are deep and original.

Beware your tendency to overlook flaws in things you love.

  • davesmith001

    Is there not an opposite bias as well? “The grass is greener…”

  • http://patheos.com/blogs/hallq/ Chris Hallquist

    >For example, a movie maker can ignore criticisms that her movie is trite, unrealistic, and without social commentary. She knows that if she can make the actors pretty enough, or the action engaging enough, we may love the movie enough to tell ourselves it is realistic, or has important social commentary.

    This seems dubious. Plausibly humans have buttons you can press to make a movie seem realistic and profound without actually being those things, but I don’t think “pretty actors and engaging action” are the buttons. Who the hell thought “Avengers” was realistic or had important social commentary?

    • Doug

      Yes, but Avengers wasn’t even trying to be a serious movie. Even Oscar-winning movies contain far more attractive people, and much more action, than everyday life. Even movies that are widely known for including an unattractive person, like Precious, mostly have an attractive supporting cast.

      If these aspects didn’t make movies seem more realistic, why would realistic filmmakers spend the additional resources to hire attractive people or add action?

      • IMASBA

        “If these aspects didn’t make movies seem more realistic, why would realistic filmmakers spend the additional resources to hire attractive people or add action?”

        Because those things draw viewers who otherwise would not have watched. Different people watch the same movie for different reasons and sometimes people who came for the explosions and babes can still be surprised to find themselves getting the “message” of the movie. Also, the international market shouldn’t be forgotten: a movie where every other person is obese would be realistic to an American audience but seem comical to a Japanese audience and therefore distract from the message.

    • http://juridicalcoherence.blogspot.com/ Stephen Diamond

      Who the hell thought “Avengers” was realistic or had important social commentary?

      I think you’re correct that construal-level theory predicts that an Avengers movie with ugly actors and without engaging action would not only be less entertaining but also be viewed as (even) less realistic and containing (even) less social commentary. But I think the prediction is accurate.

  • IMASBA

    “Similarly, most actors don’t really need to learn how to express deep or realistic emotions. They know that if they can make their skin smooth enough, or their figure tone enough, we may want to believe their smile is sincere and their feelings deep.”

    It’s easy to make that critique when all you know are American and British productions, but trust me, American and British actors on screen are usually really good compared to actors in many other countries; they really do emotions better.

    “Why? Because we have near and far reasons to like things. And while we might actually choose for near reasons, we want to believe that we choose for far reasons.”

    There’s wisdom to be had in realizing that not everything has to be rational, because rationally speaking life is pointless; we create a purpose for it through our beliefs and our loving of certain things or people. Instead of judging others for loving and believing different things, be happy for them that they found some meaning to their lives (even if they don’t realize it; in fact that’s even better, because ignorance really is bliss), except when they cause a serious threat to you or others, of course. When utility is defined as happiness, it is often more utilitarian to let people love or believe in whatever feels good to them.

    • B_For_Bandana

      Well, hang on. Don’t go into an “Actually, being irrational is more rational if you really think about it, because who is happier, right?” Flowers-For-Algernon death spiral on me just yet. Just having strong likes and dislikes is not the kind of bias Robin is talking about. He’s talking about liking or disliking something for one reason, and then believing that you like or dislike it for a reason that reflects better on you than the real one.

      In other words, it is completely and totally rational to enjoy A Scanner Darkly (a profound movie, let’s say, for the sake of the argument) solely on the basis that Keanu Reeves and Robert Downey Jr. (lead actors in that movie) are hot. It could even be rational, though dishonest, to lie to your friends about your real motives, so that they will think you are cool and intelligent. What is not rational is to then be convinced by your own lies that you actually value movies more for their artistic merit than the hotness of their actors, and then miss out on watching Speed or Iron Man (featuring Mr. Reeves and Mr. Downey Jr. respectively) because those are just dumb action movies.

      What Robin wants is for us to stop lying to each other and ourselves so often. He doesn’t want (reaching for the easy example) religious people to give up religion, create a giant hole in their life, and get all depressed and nihilistic. He wants people who desire ritual and community to get together in a building that has a sign in front, in huge letters, “GET YOUR RITUAL AND COMMUNITY HERE, STEP RIGHT UP, NOW WITH GROUP SINGING AT NO EXTRA CHARGE (NO METAPHYSICAL DOCTRINE IMPLIED)”

      Very briefly, Robin’s thesis is that irrationality is not actually about avoiding pure existential angst, it’s about more mundane stuff like ego and social status. This suggests that if we could change the way we construct social status in just the right way, we could make people less susceptible to irrationality.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        What is not rational is to then be convinced by your own lies that you actually value movies more for their artistic merit than the hotness of their actors, and then miss out on watching Speed or Iron Man (featuring Mr. Reeves and Mr. Downey Jr. respectively) because those are just dumb action movies.

        This might be quite “rational” if you get more satisfaction from believing you are guided by artistic merit than from watching dumb action movies.

        Instrumental rationality is a very bad ideal for intellectuals: it often conflicts with truth seeking. (And that’s the problem.)

      • anon

        No, that isn’t Robin’s thesis. That’s the Philosophy of LessWrong or Michael Vassar, or something.

        Robin offers advice to readers who are concerned only with the truth. His posts are usually, “if you care only about the truth, if you’re optimizing for the accuracy of your world model and nothing else, make sure you’re mindful of such and such common errors [and here's some evidence for their existence and/or a theory for why they exist]”. He occasionally implies that one has a moral imperative to subordinate one’s emotions to Truth, but he never explicitly says so, and I don’t think he believes it.

      • http://entitledtoanopinion.wordpress.com TGGP

        Robin is explicit that truth should be your first goal here.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        I don’t interpret him that way. (Weak clue: Robin upvoted a conflicting account.) Robin provides a conditional recommendation: if you’re concerned about truth, you should recognize the existence of bias and seek to overcome it.

        This says nothing about how much truth we should seek. It’s palpably silly to have a cause and be indifferent to its truth, but an equally warranted conclusion is to avoid having causes!

        In general, Robin seems much more concerned about truth in near-mode than far-mode matters. In the latter, I would go so far as to say he is cavalier about truth. Thus, approvingly:

        How can you be so sure of your intellectual standards and your preferred interpretations of our words, so as to put at risk all this useful religious practice?

        [Yudkowsky is the opposite: serious about far-mode truth and cavalier about the near-mode variety. In my classification scheme, this makes Robin Monomaniacalist and Yudkowsky Demagogist. See "Utopianism, Demagogism, and Managerialism are left, right, and center: Patterns of opportunism and rigidity" http://tinyurl.com/7xrb9u2 ]

      • http://entitledtoanopinion.wordpress.com TGGP

        Rewatching the video (which is not available through my earlier link, but can be found from here), I see it concludes with him saying “If you really cared about truth, you would make it one of your causes, maybe your top cause”. So I think you’re right.

  • efalken

    I think most intellectual biases come from a ‘far’ prejudice: the desire to appear technically state-of-the-art, to support a big theory (e.g., Keynesian, classical). The near bias is more instinctual, driven by love, sex, status; the far bias is more acquired; but both are biases.

    Yet, following Antonio Damasio’s work, if we had no biases, we wouldn’t be very efficient either. People who have suffered brain injuries which prevent them from perceiving their own feelings spend hours deliberating over irrelevant details, such as where to eat lunch. The human mind, collectively, converges on the truth better when individuals are biased.

    • STINKY

      “The human mind, collectively, converges on the truth better when individuals are biased.”

      Could you cite an example? I’m not doubting this claim, I just wonder what it looks like in practice.

      • efalken

        Google Antonio Damasio; he riffs on the research I mentioned. If individuals have no emotions, they dither. Clearly there’s ‘moderation in all things’ as an optimum, but zero emotion, leading to zero bias and detached rationality, is not optimal.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        How can you say that spending hours dithering over where to eat lunch represents absence of bias? I think you’re equating emotion with bias, which is itself the bias Damasio sought to correct.

      • efalken

        Isn’t emotion simply an added weight/preference on some objective that is not amenable to objective calculus?

      • IMASBA

        I’d say so. Emotions may be a solution to navigation and attention problems in AI. Keeping tabs on everything in your surroundings and calculating every possible choice is so computationally expensive that it’s better to just go with a semi-random impulse most of the time.

      • Cambias

        This suggests that some of the more apocalyptic fears of superintelligent AIs may be unwarranted: sure, a supermind might be able to wipe out humanity as annoying pests — but if it’s incapable of feeling annoyance, then why bother?

      • B_For_Bandana

        Worries about UFAI do not depend at all on the AI not liking us. The example everyone uses is a super-intelligent AI made to build paperclips; it is so good at this that it rapidly turns all matter in the solar system, including us, into paperclips. It wasn’t annoyed, didn’t even know we existed, yet we are dead. What the example implies is that any powerful enough optimization process, whether it has recognizable emotions or not, is horribly dangerous to us no matter what it is optimized for, unless it is specifically optimized to be nice to us. How to implement this in the form of computer code is the Friendly AI problem, and it is currently considered open.

      • IMASBA

        That’s not the example “everyone uses”. A hyper-specialized AI like that can easily be outsmarted and destroyed. Nanobots are a different matter but those are not intelligent.

      • B_For_Bandana

        Yes, “powerful enough” (e.g. to create nanobots) is the big weasel phrase there.

      • IMASBA

        “Anyway, the point here is that an indifferent powerful entity is almost as bad as a malicious powerful one, since the former could wipe us out without noticing or caring, in pursuit of goals orthogonal to what we care about.”

        Right, I agree that that is an important point that people have to keep in mind, AIs indeed don’t have to have anything against humans per se to still wipe them out, just like humans don’t have anything against corals per se but we are wiping them out almost without realizing it.

      • IMASBA

        @efalken:disqus

        The brain CAN randomize, just not consciously. Where else would emotions and creativity come from?

        @Cambias

        Well, a “superintelligent” AI might very well have emotions or functions behaving like emotions because those help it make decisions. Any AI definitely needs to have “passions” to be active, the more intelligent the AI becomes the more it will be able to apply abstract “passions” to situations it wasn’t designed to apply them to. The results are unpredictable.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        >The brain CAN randomize, just not consciously. Where else would emotions and creativity come from?

        I’ve seen empirical support for the first sentence. But randomization isn’t the basic source of emotion, which is (to put it simplistically) the product of information processing by the prefrontal cortex (right lobe for negative emotions, left lobe for positive emotions).

      • IMASBA

        Emotions are probably created using some (pseudo)random processes, as is creativity. There really is no other way of doing them even remotely efficiently. Of course the random component can be “hardware” (or wetware, if you will) based, with no actual electrical signals containing random numbers: for example, random growth of synapse strength between brain cells, or maybe random signals produced by random fluctuations in K and Na flows and concentrations.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        Gerald Edelman showed that there is random growth in the development of synapses, a process of growth and pruning, particularly early in development but persisting throughout the lifespan. But there’s no evidence that randomness is involved in the experience of a particular emotion at a particular time or that randomness (even in development) has anything specific to do with emotion (as opposed to other mental processes).

        Emotions are too important to survival to be left to chance. They are the result of rapid processing, it is true, and for that reason there’s already built in a merely statistical relation to adaptive demands.

        We randomize behavior by choice in competitive situations to make our behavior unpredictable to opponents.

        Emotions are actually more obviously subject to psychological determinism than most other forms of cognition.

        (If emotion seems random to you, it’s probably because you’ve never been psychoanalyzed.)

      • IMASBA

        “Emotions are too important to survival to be left to chance.

        If emotion seems random to you, it’s probably because you’ve never been psychoanalyzed.”

        Psychoanalysis allows too much room for fraud (there are too many tricks that make it easy to fit a BS explanation on everything). When I talk about random processes in the mind I don’t mean the results are completely unpredictable. I’m talking about processes that use some form of randomness, equivalent to something like Metropolis sampling on computers. Results are only a tiny bit random, they fluctuate only a bit, but the process did use random elements to dramatically speed up calculation time vs. an exhaustive exact process.

      • Cambias

        Oh, undoubtedly: but those emotion-like impulses would have to be designed in by someone.

        Unless . . . it occurs to me that if AI develops accidentally, the AI which would stand the best chance of surviving would be the one with a “survival instinct” or whatever you wish to call it. So there would be a selective pressure, just like in biological evolution, for AIs to have a sense of self-preservation. One hopes that would include cooperation with other intelligences like us.

      • efalken

        Interestingly, from a logical standpoint many indecisions can be resolved via a randomizing strategy: put a penalty on delay, and if the cost of the difference between two choices is less than that penalty, make a statistical decision. For example, a tennis player who needs to serve down the middle or to the outside with 60-40 probabilities can key off whether the second hand is between 0 and 36 to serve one way, etc. Given two equal choices, one can always flip a coin (or pseudocoin). But randomizing isn’t a human default; it’s hard to do (really, impossible for a human without some sort of technology).
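        A minimal sketch of that tie-breaking idea (the function names, weights, and delay costs are hypothetical illustrations, not anything from the comment itself):

        ```python
        import random

        def choose_serve(p_middle=0.6):
            """Mixed strategy: serve down the middle with probability
            p_middle (the 60-40 split described above), outside otherwise."""
            return "middle" if random.random() < p_middle else "outside"

        def decide(options, values, delay_cost):
            """If the top two options differ by less than the cost of
            deliberating further, pick between them at random; otherwise
            just take the best one."""
            ranked = sorted(options, key=lambda o: values[o], reverse=True)
            best, second = ranked[0], ranked[1]
            if values[best] - values[second] < delay_cost:
                return random.choice([best, second])
            return best
        ```

        The point of `decide` is that the coin flip is only invoked when the stakes of the choice are smaller than the stakes of continuing to dither.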

        I think we learn by putting emotion into our uncertain preferences and noting the results, whereas if we acted like programs and randomized in these situations, we would spend too much time evaluating trivia, or too little time evaluating our broader preferences. Emotion is proportional to an inarticulable sense of importance. Now, we could weight that too and do the math, but our neocortex isn’t aware of all that information, just the emotion with some vague sense of its origin.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        Added to what? Are not all preferences (completely) emotional? (Per Hume.)

        But, no, I don’t agree with your formula. Emotions don’t merely drive behavior (arationally, you might say). They are also carriers of information “not amenable to objective calculus.”

        For example, you finish a seemingly congenial conversation with someone, but find yourself experiencing a humiliated rage. Although you’re not consciously aware of the injury to status you suffered by some very subtle dig, the emotion informs you of its existence. You can then apply objective analysis (sometimes) to figure out what it was (and whether vengeance is warranted). But you would never have been aware of it had you not experienced the (apparently) unjustified emotion.

        This process sometimes does misfire because our reactions are often partly neurotic. Still it’s real information–and an intrinsic input to rational functioning.

  • arch1

    I just skimmed parts of the Martin Gardner autobiography. He relates that in response to one of his debunking books (it may have been his 1952 Fads & Fallacies in the Name of Science, or it may have been a more recent one), he received much feedback of the form ‘great book, love how you nailed all that pseudoscience, but I can’t understand why you included topic x in your book’ – for various topics x, roughly uniformly distributed across all of his topics.

  • ES

    These things (normally) don’t happen if we are able to look at things honestly. It takes some time, but everyone will eventually get there.

    • IMASBA

      Nope, we will never all get there and that’s a good thing. Like efalken described below: a person without passions and emotions that inevitably make him/her biased just doesn’t do anything because a) it’s too difficult to weigh all possibilities rationally, even when choosing restaurants (which is why I believe we would have to give AIs some impulsive inputs that resemble emotions) and b) rationally there is no reason to ever get out of bed, sure you might starve but you will not necessarily see that as a bad thing if you have no passions and emotions (fear of death is an emotion, the will to live a passion).

  • Pingback: Recomendaciones | intelib

  • http://juridicalcoherence.blogspot.com/ Stephen Diamond

    The conflict between near and far cognition is an example of cognitive dissonance. (See “Uncomfortable ideas and disfluent expression affect us similarly” http://tinyurl.com/8m65wry ) Cognitive-dissonance theory predicts that we are motivated to think of our near-mode choices as consonant with our far-mode ideals; but the reverse is also true. We also modify our far-mode ideals to correspond more with our near-mode choices. Loving a dishonest person, for example, will motivate you both to see the person as more honest than she is and to think honesty is less important than you thought before.

  • Pingback: The Reason Your Good Ideas Are Mercilessly Ignored | DARELEAD