If Self-Fulfilling Optimism is Wrong, I Don’t Wanna be Right

Often, I hear claims like the following: "too many people are cynical about electoral politics."  It’s hard to know just what to make of that sort of assertion.  For cynicism about electoral politics is most likely true, and, moreover, as a good little Bayesian, I should count the cynicism of just about everyone else as evidence strengthening that belief. 

"But!" the anticynic might say, "cynicism is a self-fulfilling prophecy!  If we all believe that politics is run by crooks, we won’t demand better at the voting booth [for example, because we vote strategically for the least offensive guy we think can win rather than the one we trust]!  If enough people are optimistic, your optimism will be self-fulfilling too!" 

So imagine the following belief/payoff correspondences.  If you hold a true cynical belief, you get payoff A.  If you hold a false cynical belief (cynicism in a nice world), you get payoff B.  If you hold a true optimistic belief, you get payoff C, and if you hold a false optimistic belief, you get payoff D.  Suppose C>A>B>D (or C>A>D>B; it doesn’t matter).  And suppose that the world is nice if at least M people are optimistic (where N is the number of people in the world, and N>M>1) and nasty otherwise.

Anyone who knows game theory will immediately see that this world amounts to a coordination game with two pure-strategy Nash equilibria: everyone optimistic in a nice world and everyone cynical in a nasty world.  And the nice-world equilibrium has higher payoffs for all.
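The setup can be checked with a small script (a minimal sketch: the payoff values are illustrative assumptions, only the ordering C > A > B > D matters):

```python
# Illustrative payoffs (arbitrary values; only the ordering C > A > B > D matters):
# A: true cynical belief, B: false cynical belief,
# C: true optimistic belief, D: false optimistic belief.
A, B, C, D = 2, 1, 3, 0

def payoff(optimistic: bool, num_optimists: int, M: int) -> int:
    """Payoff for one agent, given the total count of optimists and threshold M."""
    nice_world = num_optimists >= M
    if optimistic:
        return C if nice_world else D
    return A if nice_world else B

def is_equilibrium(num_optimists: int, N: int, M: int) -> bool:
    """True if no single agent can gain by unilaterally switching belief."""
    # An optimist considers turning cynical (one fewer optimist)...
    opt_stays = payoff(True, num_optimists, M) >= payoff(False, num_optimists - 1, M)
    # ...and a cynic considers turning optimistic (one more optimist).
    cyn_stays = payoff(False, num_optimists, M) >= payoff(True, num_optimists + 1, M)
    if num_optimists == 0:
        return cyn_stays
    if num_optimists == N:
        return opt_stays
    return opt_stays and cyn_stays

N, M = 10, 6
print(is_equilibrium(N, N, M))  # everyone optimistic: nice world, stable
print(is_equilibrium(0, N, M))  # everyone cynical: nasty world, stable
print(is_equilibrium(5, N, M))  # just below the threshold: not stable
```

With these numbers the all-optimistic and all-cynical profiles are both stable, while mixed populations below the threshold are not, which is exactly the two-equilibrium structure described above.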

Now suppose we’re in a nasty world.  How do we get to the nice world?  It seems like we’d do best if someone came along and deceived at least M people into thinking we’re in the nice world already! 

This shows us that not only can individually rational behavior be collectively suboptimal, so can individually rational (truth-maximizing) belief.  Should we support demagoguery? 

I imagine the self-fulfilling false belief problem works on some individual cases too.  For example, suppose I have more success in dating if I’m confident?  Suppose I’m a person who has poor success in dating.  True beliefs for me are not confident ones, but I’ll do better if I adopt falsely confident beliefs, which will then be retroactively justified by the facts.  Should I engage in self-deception? 

  • alex

    What’s wrong with lying if it gets you ahead in life? 😉

  • http://hanson.gmu.edu Robin Hanson

    In the usual construction of games, you are allowed to choose any set of actions and assign any payoffs you like to those actions – but you can’t just apply the word “belief” to anything. To make your example persuasive, you need to show why certain beliefs would lead to these outcomes, rather than certain actions.

  • http://yudkowsky.net/ Eliezer Yudkowsky

    If you’re allowed to set up a game in which beliefs are rewarded directly, apart from actions, then of course you can set up a game that rewards irrationality.

    E.g., this payoff matrix:

    Believe sky is neon green: $100 payoff
    Believe sky is blue: Boot to the head

  • Nominull

    If self-fulfilling optimism is wrong, I still want to be right.

    Ooops?

  • Caledonian

    If you’re not willing to give the matter much thought, you can always just delude yourself. This seems to be the strategy most people use.

    Or, you can recognize that if your belief determines what the truth is, the truth is subject to being altered by your actions and therefore you can act to change it. For example, the point would not be to convince yourself that you are confident, but to recognize the truth that you have the potential to be confident and the best way to manifest this potential is to attempt to become so. We practice skills and abilities not because we possess mastery, but because we possess the potential for improvement and attainment.

    Oddly, the two strategies appear nearly identical to an external observer despite being utterly incompatible.

  • http://www.aleax.it Alex Martelli

    These issues (including in particular the hypothesis that a certain level of overconfidence in suitable contexts is adaptive) have been hotly debated and contested in the psychology community since Alloy and Abramson’s seminal 1979 article; see http://en.wikipedia.org/wiki/Depressive_realism for a good short summary and essential biblio.

  • George Weinberg

    But this “anticynical” argument is an obvious crock. It’s absurd to claim there ever has been a candidate who was universally acknowledged to be the good choice and who lost purely as a result of self-fulfilling cynicism.

  • tcpkac

    “Believe sky is neon green: $100 payoff
    Believe sky is blue: Boot to the head”
    I think we can assume an interaction between the belief and the environment before getting the payoff, so the above scenarios would not be valid examples.

  • Vilhelm S

    Eliezer:

    But it is still interesting to see an example of a real-life scenario where the outcomes rewards false belief in that way. After all, hyperintelligent aliens with a penchant for tricky boxes are not very common in daily life, so it still makes sense to aim towards keeping your beliefs about the world correct, in the expectation that that will eventually do you some good. Whereas, if scenarios like the one Paul sketched are common, we might want to modify that goal (c.f. your slogan “rationality should win”).

  • Paul Gowder

    Robin & Caledonian, yes, but suppose you don’t know the payoffs. We’re leaving ordinary game theory land right now, but the intuition, I think, remains. I might be cynical, and cynicism might be correct, but *if* I knew the payoffs, and if enough people went along with me, I might prefer to act as if I were optimistic. (Indeed, I think one way to interpret cynicism about the political system is as not knowing the “fact” [if fact it be] that if everyone were optimistic, the world would be better.) Since I don’t know the payoffs, I don’t know to change my action. I need to be deceived.

  • Paul Gowder

    Actually, I can do better than this, without mucking around with ignorance of payoffs and so forth. Think of it in terms of beliefs about what other players are doing.

    Here’s a toy game:

    Imagine there are two candidates, the good and the evil one. (In order to be nonpartisan and yet somewhat aligned with likely reader preferences, we’ll call the good one Ron Kucinich, and the evil one Hillary Huckabee.) Assume there’s epsilon cost to voting, expected utility (I might be able to just use expected value here, but why not go all the way) for HH with probability 1 is 0, and expected utility for RK with probability 1 is >epsilon. And assume you form a belief about how many people are voting for RK vs HH depending on whether you are cynical or optimistic. Make it binary for simplicity. Let M be the number of people necessary for RK to win, and if RK doesn’t win, HH wins. HH wins if nobody votes.

    If you’re optimistic, you believe that at least M-1 people are planning to vote for RK. If you’re cynical, you believe that less than M-1 people are planning to vote for RK.

    Suppose that at time t, everyone is cynical, so everyone believes that nobody else is going to vote for RK. Expected utility for going to vote = -epsilon. Nobody votes. Now suppose an Evil Demon convinces at least M people to be optimistic. Those M people think at least M-1 people are — irrationally — going to vote for RK. (But they don’t know how many people have that belief, i.e. their have some positive probability of being the decisive voter.) Their expected utility for voting for RK becomes positive. The only thing that has changed is their belief about what other people are planning to do.

    Yay for the evil demon!
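The expected-utility flip in this toy voting game can be sketched numerically (all numbers are illustrative assumptions; in particular the pivot probability after the demon's intervention is simply posited, not derived from the game):

```python
# Toy numbers for the voting game above (all values are illustrative assumptions).
EPSILON = 0.01   # cost of voting
U_RK = 1.0       # utility if RK wins (> epsilon, per the setup)
U_HH = 0.0       # utility if HH wins

def eu_of_voting(p_pivotal: float) -> float:
    """Expected utility of voting for RK, given one's believed
    probability of being the decisive M-th voter."""
    return p_pivotal * (U_RK - U_HH) - EPSILON

# A cynic believes fewer than M-1 others will vote RK, so a single vote
# cannot tip the outcome: p_pivotal = 0, and staying home dominates.
assert eu_of_voting(0.0) < 0

# After the Evil Demon instills optimism, the voter assigns some positive
# probability (an assumed value here) to casting the decisive vote,
# and voting becomes worthwhile.
assert eu_of_voting(0.05) > 0
```

The only quantity that changes between the two cases is the voter's belief about what others are doing, expressed here as the pivot probability; payoffs and the cost of voting stay fixed.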

  • Paul Gowder

    Oh. For “their” read they, and set u(RK)>M(epsilon).

  • eisegetes

    Paul: voting is a terrible example of this phenomenon, since individual votes virtually never affect outcomes. Thus, there is very little reason to prefer any type of attitude in relation to voting (other than those that make you feel good), for the simple reason that your vote will have no practical effect.

    That doesn’t kill your thesis, though. Here’s a better example: Foolish optimism about medical outcomes might well, through the placebo effect and other poorly understood relations between mental state and biophysical response to disease, actually improve your prospects of recovering from a dangerous illness. Someone who looks at the situation in an epistemically proper way might well decide that, since they will probably die, there is little cause for hope. Whereas a person who is foolishly optimistic might be able to improve their chances of recovery. We could call this an example of where a practice that is generally utility maximizing (that is, endeavoring to hold only justifiable true beliefs) is locally more likely to be harmful than helpful.

    Then again, maybe the effect is smaller than we often suppose; it might be outweighed by the likelihood that foolish optimism will cause people to undergo painful treatments and disappointment, rather than seeking a peaceful and comfortable death.

    You’ll have to work harder to make a case that holding false-but-optimistic beliefs about politics will ever be likely to have a positive effect.

  • eisegetes

    And assume you form a belief about how many people are voting for RK vs HH depending on whether you are cynical or optimistic.

    But this is just absurd, right? Who would form such a belief based solely on their cynicism or optimism? Normal people would rely on facts to at least some degree.

  • http://hanson.gmu.edu Robin Hanson

    Paul, you are still not taking seriously the idea that a “belief” is more than just some parameter you can assume anything you like about.

  • Paul Gowder

    Robin, I think I’m fairly well in line with the treatments of belief given by many others. I’ve often heard the governmental solution to coordination games being expressed in terms of changing beliefs about what other people will do, for example. (As in: the reason the law “drive on the right” works is because it induces in me a belief that others will drive on the right, and my best reply is to do what everyone else is doing.)

    The crux is that the belief does lead to the outcome, because the belief is about what the other players are doing, and my best reply given belief A about what other players are doing is different than my best reply given belief B about what other players are doing.

    Another example: Avner Greif’s recent book, Institutions and the Path to the Modern Economy: Lessons from Medieval Trade — which is just utterly brilliant, by the way, I highly recommend it to everyone — analyzes institutions largely in terms of changing people’s beliefs about the strategies of other players, and treats self-reinforcing (a term he invents, but a very useful one) institutions as those that support the maintenance of those beliefs in future rounds.

    I see my example as just another case of that approach, and in fact it can be rewritten as the paradigm example of one of them. If everyone believes everyone else will drive on the right, everyone will drive on the right, even if the payoff for everyone driving on the left would (somehow) be higher. If Descartes’s Demon gets in everyone’s head and changes that belief to the belief “everyone will drive on the left,” behavior will follow in due course. And thus, even though the belief was false when the Demon inserted it into our brains, our holding it (plus our responsiveness to incentives) brought it about that it was true subsequent to our holding it. Truth followed belief, rather than the other way around.

  • Caledonian

    Paul, it’s interesting that the model you propose works only as long as there aren’t too many empiricists in the population. If enough people try to verify how others will act through observation, the whole system comes crashing down.

    Most revolutions can probably be seen as a failure of a self-perpetuating belief system – if everyone thinks everyone else will act to maintain the system, even if only out of fear of punishment or retribution, it will always be too risky for anyone to rebel.

    • themusicgod1

      Wouldn’t it be OK if there were empiricists around in the population, as long as they were distracted by *other issues*? Granted: the more of them there are, the more likely that the particular issue in question could get their attention, but it seems like either

      a) “empiricist” needs to be ” in relation to this topic”

      or

      b) empiricists need the motive/attention span, in general, to attack the basis of reality in question.

      ?

      I’m picturing different fields of concern surrounded by Hope trying to pawn off empiricists on each other like a game of hot potato.

      In the case of revolutions: that a particular Hope-field wins this game.

  • eisegetes

    Okay, Paul: induce in yourself a belief that the sky is always, and has always been, green. And then get back to us.

    Doxastic voluntarism has always been a very implausible epistemological position. Have you ever just voluntarily changed one of your beliefs without the input of any new information regarding that belief? It would be a very strange thing to do.

    Note that none of your examples is a case of belief-by-will; in the driving case, the law causes the belief, because the law is a relevant fact that provides important evidence for the belief (the law is to drive on the right, most people follow the law, ergo most people will drive on the right, so it is safer for me to do so as well). Evil demons don’t really count either; there it is not our choice to believe, but rather the result of an external intervention.

    Here’s another thing: my holding a belief has no causal effect on everybody else’s likelihood of holding that belief, unless I communicate that new belief to them. But that communication would be a separate step in a game. Why couldn’t I just maintain the belief that is more likely to be true, lie to the other game players, and wait to see whether they have adopted the belief that I currently pretend to hold?

    In other words, your solution to the game involves an extraordinary amount of coordination by the players — viz., they can all choose to hold a (currently false) belief, and will cooperate to do so in order to achieve a desired outcome. Well, if they are so good at cooperating, why couldn’t they just cooperate on the basis of actions — i.e., voting in your preferred way, driving on the left, or whatever — and keep their beliefs to what is justified until there is actual evidence that supports changing them? Why is the one act of coordination more likely than the other?

  • Paul Gowder

    Eisegetes, did I ever say that we could get from these sub-optimal, but correct, beliefs, to the optimizing, but false (until they become true), beliefs by will? Nope. (The dating example was the closest case, but in all the others I added an external agent of belief change for a reason.)

  • http://omniorthogonal.blogspot.com mtraven

    Funny nobody has mentioned Obama by name, although he seems to be lurking in the background of this post. His entire appeal and campaign seems based on the idea that he can act as a coordination point for shifting people and the country as a whole from cynicism to idealism. Yes we can! This is what makes him both appealing (people would rather be idealistic than cynical, and are drawn to someone who can help them shift) and creepy (people are rightly suspicious of the arational manipulation inherent in this dynamic). A cult is no longer a cult once a majority is won over.

  • Acheman

    Actually, I seem to remember we have a real example of this phenomenon in the UK, or at least used to have. The third party, the Liberal Democrats, has a pretty lowish share of the seats in parliament. But I recall it used to be the case that there was a quite considerable proportion of the population who would say in opinion polls that they’d vote for the Liberal Democrats if they thought it would make any difference, but because they were such no-hopers they were going to vote Labour or Conservative. It’s one of the reasons the Liberal Democrat party has been calling for proportional representation so consistently and for such a long time – under PR systems, votes are much less likely to be ‘wasted’ and so the Lib Dems would almost certainly end up with a far higher share of the seats in parliament, and as a result of this a higher share of the vote (lovely backwards-causation there, eh?) than they have presently – if not a majority, probably enough to spoil one of the other parties’ majorities and force them into a coalition. If Lib Dem supporters had been able to create a little self-fulfilling optimism among themselves they could probably have created this effect in several previous elections.

  • Jadagul

    I think I agree, or at least mostly agree, with Paul Gowder. For another true but kind-of-silly example, I tend to be fantastically optimistic: I have this sort of deep-seated belief that ultimately, everything will work out and nothing really will go wrong. Which is completely arational; there’s no reason my life should work out nicely. But it means I’m usually way less stressed and nervous than most of my friends, because they’re worried about all the things that could go wrong and I’m not. So this belief actually improves my ability to get stuff done.

    Incidentally, it may be worth noting that the only part of my life that I don’t think will just work out for the best is my love life; this is also the only part of my life that doesn’t seem to generally work out pretty well. Of course, I’m sure a decent chunk of that is confirmation bias.