Wouldn't it be OK if there were empiricists around in the population, as long as they were distracted by *other issues*? Granted: the more of them there are, the more likely that the particular issue in question could get their attention, but it seems like either

a) "empiricist" needs to be " in relation to this topic"

or

b) empiricists need the motive/attention span, in general, to attack the basis of reality in question.

?

I'm picturing different fields of concern surrounded by Hope trying to pawn off empiricists on each other like a game of hot potato.

In the case of revolutions: that a particular Hope-field wins this game.

I think I agree, or at least mostly agree, with Paul Gowder. For another true but kind-of-silly example, I tend to be fantastically optimistic: I have this sort of deep-seated belief that ultimately, everything will work out and nothing really will go wrong. Which is completely arational; there's no reason my life should work out nicely. But it means I'm usually way less stressed and nervous than most of my friends, because they're worried about all the things that could go wrong and I'm not. So this belief actually improves my ability to get stuff done.

Incidentally, it may be worth noting that the only part of my life that I don't think will just work out for the best is my love life; this is also the only part of my life that doesn't seem to generally work out pretty well. Of course, I'm sure a decent chunk of that is confirmation bias.

Actually, I seem to remember we have a real example of this phenomenon in the UK, or at least used to have. The third party, the Liberal Democrats, has a pretty lowish share of the seats in parliament. But I recall it used to be the case that there was a quite considerable proportion of the population who would say in opinion polls that they'd vote for the Liberal Democrats if they thought it would make any difference, but because they were such no-hopers they were going to vote Labour or Conservative. It's one of the reasons the Liberal Democrat party has been calling for proportional representation so consistently and for such a long time - under PR systems, votes are much less likely to be 'wasted' and so the Lib Dems would almost certainly end up with a far higher share of the seats in parliament, and as a result of this a higher share of the vote (lovely backwards-causation there, eh?) than they have presently - if not a majority, probably enough to spoil one of the other parties' majorities and force them into a coalition. If Lib Dem supporters had been able to create a little self-fulfilling optimism among themselves they could probably have created this effect in several previous elections.

Funny nobody has mentioned Obama by name, although he seems to be lurking in the background of this post. His entire appeal and campaign seems based on the idea that he can act as a coordination point for shifting people and the country as a whole from cynicism to idealism. Yes we can! This is what makes him both appealing (people would rather be idealistic than cynical, and are drawn to someone who can help them shift) and creepy (people are rightly suspicious of the arational manipulation inherent in this dynamic). A cult is no longer a cult once a majority is won over.

Eisegetes, did I ever say that we could get from these sub-optimal but correct beliefs to the optimizing but false (until they become true) beliefs by will? Nope. (The dating example was the closest case, but in all the others I added an external agent of belief change for a reason.)

Okay, Paul: induce in yourself a belief that the sky is always, and has always been, green. And then get back to us.

Doxastic voluntarism has always been a very implausible epistemological position. Have you ever just voluntarily changed one of your beliefs without the input of any new information regarding that belief? It would be a very strange thing to do.

Note that none of your examples is a case of belief-by-will; in the driving case, the law causes the belief, because the law is a relevant fact that provides important evidence for the belief (the law is to drive on the right, most people follow the law, ergo most people will drive on the right, so it is safer for me to do so as well). Evil demons don't really count either; there it is not our choice to believe, but rather the result of an external intervention.

Here's another thing: my holding a belief has no causal effect on everybody else's likelihood of holding that belief, unless I communicate that new belief to them. But that communication would be a separate step in a game. Why couldn't I just maintain the belief that is more likely to be true, lie to the other game players, and wait to see whether they have adopted the belief that I currently pretend to hold?

In other words, your solution to the game involves an extraordinary amount of coordination by the players -- viz., they can all choose to hold a (currently false) belief, and will cooperate to do so in order to achieve a desired outcome. Well, if they are so good at cooperating, why couldn't they just cooperate on the basis of actions -- i.e., voting in your preferred way, driving on the left, or whatever -- and keep their beliefs to what is justified until there is actual evidence that supports changing them? Why is the one act of coordination more likely than the other?

Paul, it's interesting that the model you propose works only as long as there aren't too many empiricists in the population. If enough people try to verify how others will act through observation, the whole system comes crashing down.

Most revolutions can probably be seen as a failure of a self-perpetuating belief system - if everyone thinks everyone else will act to maintain the system, even if only out of fear of punishment or retribution, it will always be too risky for anyone to rebel.

Robin, I think I'm fairly well in line with the treatments of belief given by many others. I've often heard the governmental solution to coordination games being expressed in terms of changing beliefs about what other people will do, for example. (As in: the reason the law "drive on the right" works is because it induces in me a belief that others will drive on the right, and my best reply is to do what everyone else is doing.)

The crux is that the belief does lead to the outcome, because the belief is about what the other players are doing, and my best reply given belief A about what other players are doing is different than my best reply given belief B about what other players are doing.

Another example: Avner Greif's recent book, Institutions and the Path to the Modern Economy: Lessons from Medieval Trade -- which is just utterly brilliant, by the way, I highly recommend it to everyone -- analyzes institutions largely in terms of changing people's beliefs about the strategies of other players, and treats self-reinforcing (a term he invents, but a very useful one) institutions as those that support the maintenance of those beliefs in future rounds.

I see my example as just another case of that approach, and in fact it can be rewritten as the paradigm example of one of them. If everyone believes everyone else will drive on the right, everyone will drive on the right, even if the payoff for everyone driving on the left would (somehow) be higher. If Descartes's Demon gets in everyone's head and changes that belief to the belief "everyone will drive on the left," behavior will follow in due course. And thus, even though the belief was false when the Demon inserted it into our brains, our holding it (plus our responsiveness to incentives) brought it about that it was true subsequent to our holding it. Truth followed belief, rather than the other way around.
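
To make the best-reply point concrete, here is a minimal Python sketch of the driving coordination game. It is my own illustration, not from Greif or the comment above, and the payoff numbers are invented; only their ordering matters. The point it shows is that the best reply is purely a function of what you believe the other players will do, so whichever belief gets installed makes itself true.

```python
# A minimal sketch of the drive-on-the-right coordination game.
# Payoff numbers are invented for illustration; only their ordering matters.

def payoff(my_side: str, others_side: str) -> int:
    """High payoff for matching the side everyone else drives on, low for a mismatch."""
    return 1 if my_side == others_side else -10

def best_reply(belief_about_others: str) -> str:
    """Pick the side with the higher payoff, given a belief about what others will do."""
    return max(("left", "right"), key=lambda side: payoff(side, belief_about_others))

print(best_reply("right"))  # "right": the belief that others drive right sustains itself
print(best_reply("left"))   # "left": once the Demon installs the new belief, it too comes true
```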

Paul, you are still not taking seriously the idea that a "belief" is more than just some parameter you can assume anything you like about.

"And assume you form a belief about how many people are voting for RK vs HH depending on whether you are cynical or optimistic."

But this is just absurd, right? Who would form such a belief based solely on their cynicism or optimism? Normal people would rely on facts to at least some degree.

Paul: voting is a terrible example of this phenomenon, since individual votes virtually never effect outcomes. Thus, there is very little reason to prefer any type of attitude in relation to voting (other than those that make you feel good) for the simple reason that your vote will have no practical effect.

That doesn't kill your thesis, though. Here's a better example: Foolish optimism about medical outcomes might well, through the placebo effect and other poorly understood relations between mental state and biophysical response to disease, actually improve your prospects of recovering from a dangerous illness. Someone who looks at the situation in an epistemically proper way might well decide that, since they will probably die, there is little cause for hope. Whereas a person who is foolishly optimistic might be able to improve their chances of recovery. We could call this an example of where a practice that is generally utility maximizing (that is, endeavoring to hold only justifiable true beliefs) is locally more likely to be harmful than helpful.

Then again, maybe the effect is smaller than we often suppose; it might be outweighed by the likelihood that foolish optimism will cause people to undergo painful treatments and disappointment, rather than seeking a peaceful and comfortable death.

You'll have to work harder to make a case that holding false-but-optimistic beliefs about politics will ever be likely to have a positive effect.

Oh. For "their" read "they", and set u(RK) > M·epsilon.

Actually, I can do better than this, without mucking around with ignorance of payoffs and so forth. Think of it in terms of beliefs about what other players are doing.

Here's a toy game:

Imagine there are two candidates, the good one and the evil one. (In order to be nonpartisan and yet somewhat aligned with likely reader preferences, we'll call the good one Ron Kucinich and the evil one Hillary Huckabee.) Assume there's an epsilon cost to voting, that the expected utility (I might be able to just use expected value here, but why not go all the way) of HH winning with probability 1 is 0, and that the expected utility of RK winning with probability 1 is greater than epsilon. And assume you form a belief about how many people are voting for RK vs HH depending on whether you are cynical or optimistic. Make it binary for simplicity. Let M be the number of people necessary for RK to win; if RK doesn't win, HH wins, and HH wins if nobody votes.

If you're optimistic, you believe that at least M-1 people are planning to vote for RK. If you're cynical, you believe that less than M-1 people are planning to vote for RK.

Suppose that at time t, everyone is cynical, so everyone believes that nobody else is going to vote for RK. Expected utility for going to vote = -epsilon. Nobody votes. Now suppose an Evil Demon convinces at least M people to be optimistic. Those M people think at least M-1 people are -- irrationally -- going to vote for RK. (But they don't know how many people have that belief, i.e. their have some positive probability of being the decisive voter.) Their expected utility for voting for RK becomes positive. The only thing that has changed is their belief about what other people are planning to do.

Yay for the evil demon!
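
Here is a rough Python sketch of that toy game, again just my own illustration. The specific numbers are made up, and I assume an optimist puts probability 1/M on casting the decisive vote; together with the correction above (u(RK) > M·epsilon), that makes the optimist's expected utility of voting positive while the cynic's stays negative.

```python
# A rough sketch of the toy election above; numbers are illustrative only.

EPSILON = 0.0005  # cost of voting (assumed)
U_RK = 1.0        # utility if RK wins with probability 1 (assumed; note U_RK > M * EPSILON)
U_HH = 0.0        # utility if HH wins with probability 1
M = 1000          # number of voters RK needs in order to win (assumed)

def expected_utility_of_voting(optimistic: bool, p_decisive: float = 1.0 / M) -> float:
    """Expected utility of casting a vote for RK, given a belief about other voters.

    A cynic believes fewer than M-1 others will vote for RK, so their own vote
    cannot make RK win and only costs epsilon. An optimist believes at least M-1
    others will, so they assign some positive probability of being the decisive
    voter (modelled here, by assumption, as 1/M).
    """
    if optimistic:
        return p_decisive * (U_RK - U_HH) - EPSILON
    return -EPSILON

print(expected_utility_of_voting(optimistic=False))  # -0.0005: the cynic stays home
print(expected_utility_of_voting(optimistic=True))   #  0.0005: the optimist votes
```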

Robin & Caledonian, yes, but suppose you don't know the payoffs. We're leaving ordinary game theory land right now, but the intuition, I think, remains. I might be cynical, and cynicism might be correct, but *if* I knew the payoffs, and if enough people went along with me, I might prefer to act as if I were optimistic. (Indeed, I think one way to interpret cynicism about the political system is as not knowing the "fact" [if fact it be] that if everyone were optimistic, the world would be better.) Since I don't know the payoffs, I don't know to change my action. I need to be deceived.

Eliezer:

But it is still interesting to see an example of a real-life scenario where the outcome rewards false belief in that way. After all, hyperintelligent aliens with a penchant for tricky boxes are not very common in daily life, so it still makes sense to aim towards keeping your beliefs about the world correct, in the expectation that doing so will eventually do you some good. Whereas, if scenarios like the one Paul sketched are common, we might want to modify that goal (cf. your slogan "rationality should win").

"Believe sky is neon green: $100 payoffBelieve sky is blue: Boot to the head"I think we can assume an interaction between the belief and the environment before getting the payoff, so the above scenarios would not be valid examples.
