The Problem at the Heart of Pascal’s Wager

It is a most painful position to a conscientious and cultivated mind to be drawn in contrary directions by the two noblest of all objects of pursuit — truth and the general good.  Such a conflict must inevitably produce a growing indifference to one or other of these objects, most probably to both.

– John Stuart Mill, from Utility of Religion

Much electronic ink has been spilled on this blog about Pascal’s wager.  Yet, I don’t think that the central issue, and one that relates directly to the mission of this blog, has been covered.  That issue is this: there’s a difference between the requirements for good (rational, justified) belief and the requirements for good (rational, prudent — not necessarily moral) action.

Presented most directly: good belief is supposed to be truth- and evidence-tracking.  It is not supposed to be consequence-tracking.  We call a belief rational to the extent it is (appropriately) influenced by the evidence available to the believer, and thus maximizes our shot at getting the truth.  We call a belief less rational to the extent it is influenced by other factors, including the consequences of holding that belief.  Thus, an atheist who changed his beliefs in response to the threat of torture from the Spanish Inquisition cannot be said to have followed a correct belief-formation process.

On the other hand, good action is supposed (modulo deontological moral theories) to be consequence-tracking.  The atheist who professes changed beliefs in response to the threat of torture from the Spanish Inquisition can be said to be acting prudently by making such a profession.

A modern gloss on Pascal’s wager might be understood less as an argument for the belief in God than as a challenge to that separation.  If, Modern-Pascal might say, we’re in an epistemic situation such that our evidence is in equipoise (always keeping in mind Daniel Griffin’s apt point that this is the situation presumed by Pascal’s argument), then we ought to take consequences into account in choosing our beliefs. 

There seem to be arguments for and against that position… 

In its favor we can imagine situations where it’s not the nastiness of an all-knowing deity that makes our beliefs consequential, but something about our own psychologies.  Imagine Allen. He’s an alcoholic. He makes an all-things-considered judgment that it would be best for him to stop drinking. He also holds the belief that the only way for someone with his psychological characteristics to stop drinking is to join Alcoholics Anonymous. Allen is also an atheist. However, he believes that if he joins Alcoholics Anonymous, his psychological characteristics are such that he will be induced by social pressure to believe in God. Because he’s an atheist, he believes that if that belief change happens, it’ll be because his reasoning process will be warped by social pressure, and his new beliefs will be false and (more importantly) unwarranted by the evidence. 

Let’s assume that all of Allen’s present beliefs are warranted by the evidence — that they’re rational by the standards of belief that epistemically competent agents hold.  Allen is, in effect, choosing to cause himself to adopt a belief that would be false and irrational by his current lights, in order to bring about better personal consequences.  But it’s hard to call Allen’s decision wrong. 

If we think that the belief in God is what causes AA to work — if we think it’s the belief itself that’s operative in bringing about the good consequence, then the AA question is structurally indistinguishable from the problem at the heart of Pascal’s wager: the problem of making our beliefs dependent on consequences, rather than just the evidence. 

So it seems like the AA example gives us some reason to swallow Pascal’s wager, modulo the other objections (like a multiplicity of religions).  But there are arguments on the other side.  For one thing, again, remember that Pascal’s original argument suggests that the evidence is in equipoise.  It’s somewhat plausible to think of consequences as a "tiebreaker" between beliefs that are uncertain in that way.  But it’s less plausible to think that we can sensibly use consequences where evidence is not in equipoise.  One major reason for this is that it’s totally unclear how we might relate consequences and evidence in one unified process of belief formation.  For example, suppose that I think there’s a 70% chance that P is true, but that my believing P is true will cause one puppy to die.  Is the death of the puppy worth 20% + epsilon chance of truth, so that I should change my beliefs?  How about two puppies?  What if someone offers me one dollar?  How about a million dollars?  What’s the function to convert badness or goodness of consequence into weight of evidence?
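To make the missing conversion function vivid, here is a toy sketch.  Everything in it (the function, the exchange-rate parameter `lam`) is hypothetical; the point is precisely that nothing in the evidence tells us what value `lam` should take:

```python
# Toy model: the "value" of holding a belief, if we let consequences in.
# p:           evidential probability that the belief is true
# consequence: goodness/badness attached to holding the belief (e.g., -1 per dead puppy)
# lam:         the hypothetical "exchange rate" between consequences and truth --
#              the very function the post says we have no principled way to specify.

def value_of_belief(p, consequence=0.0, lam=1.0):
    """Chance of being right, plus the consequence of holding the
    belief, weighted by the (unspecified) exchange rate lam."""
    return p + lam * consequence

p = 0.7                                                  # evidence: P is 70% likely
believe_p = value_of_belief(p, consequence=-1.0)         # believing P kills one puppy
believe_not_p = value_of_belief(1 - p)                   # believing not-P is consequence-free

# With lam = 1.0, believing not-P comes out ahead (0.3 > -0.3); with
# lam = 0.1, believing P does (0.6 > 0.3).  The choice of lam does all
# the work, and nothing in the setup fixes it.
```

The model "works" only once `lam` is supplied from outside, which is just the original problem restated.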

This is a problem that’s very difficult, and I don’t purport to offer a solution.  But we should think of it as a serious line of objection to the Pascal’s wager type of argument: if consequences are simply inadmissible in belief-formation processes, Pascal’s argument fails on the spot. 

(This is a revised version of a post that I originally wrote a couple of weeks ago, which appears in its original form as a lengthy excursus on doxastic voluntarism on my personal blog, Uncommon Priors.  If you’re interested, you might check that out, though it’s less sound, I think, than the current presentation.)

  • Paul: For example, suppose that I think there’s a 70% chance that P is true, but that my believing P is true will cause one puppy to die. Is the death of the puppy worth 20% + epsilon chance of truth, so that I should change my beliefs? How about two puppies? What if someone offers me one dollar? How about a million dollars? What’s the function to convert badness or goodness of consequence into weight of evidence?

    In cases like this, the fundamental model of an agent interacting with reality has broken down; an agent is supposed to have a mind with goals, and sensors and effectors. In order to be considered as such an agent, one’s interaction with the environment has to factor through one’s sensors and effectors, i.e. your thoughts mustn’t affect reality except by affecting what your body does. If this condition fails, then it can become impossible to act rationally.

    In Pascal’s case, the notion of your thoughts being directly observable to God breaks the model. Something slightly different is wrong with Allen, and I would best describe it as a mental illness.

    Our ability to act rationally can be compromised if the privacy of our own thoughts is compromised. For example, I might hook you up to a brain scanner that scans your thoughts and then tortures you by producing exactly the worst outcome that you can think of. The more rationally you think (e.g. by thinking about how you might escape back to your family), the worse the outcomes will be for you (e.g. the machine captures and kills your family right in front of you).

  • Unknown

    Roko: “Your thoughts mustn’t affect reality except by affecting what your body does.”

    Since thoughts are based on some physical reality, everyone’s thoughts MUST affect reality in other ways besides affecting what your body does (for example, your thoughts cause (or are caused by) electrical activity and blood flow in your brain, physical actions you did not choose). So by your argument, our ability to act rationally is necessarily fundamentally compromised.

  • Stephen

    Can someone who is aware beforehand–as Allen is–that a decision will warp their reasoning ever sincerely commit to that decision? Wouldn’t he find the cognitive dissonance required for such a choice incessantly distracting? I think it would be strange if our epistemologies were really that malleable.

    Or are we talking about choosing to be effectively brainwashed? That’d be a big ol’ philosophical can of worms.

  • A footnote on AA: I realize this is framed in terms of Allen’s beliefs, but I also realize people tend to forget framing devices and sources of information. AA has a lot of local variation about religious belief. Some groups completely identify a “higher power” with God, while others leave it up to the individual. One solution for atheists is to identify the group as their higher power.

  • John Maxwell

    It seems that whenever you choose an action, you are going to indirectly change your beliefs because of availability bias. Different courses of action are going to expose you to different pieces of evidence.

    BTW, Pascal’s wager can be extended to multiple possibilities by Bayesian Decision Theory. Whenever you are faced with a number of mutually exclusive choices, you should choose the one that maximizes the expected value (likelihood times utility) not the choice that is most likely to be true. People who choose to join start ups are making this calculation, since most start ups fail.
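    A toy sketch of that expected-value rule, using the startup example (the probabilities and payoffs here are made up for illustration):

```python
# Expected-value choice among mutually exclusive options: the option most
# likely to "succeed" need not be the one with the highest expected value.
# All numbers are illustrative, not statistics about real startups.

choices = {
    "salaried job": (0.95, 1.0),    # (probability of success, payoff if it succeeds)
    "join startup": (0.10, 20.0),
}

def expected_value(prob, payoff):
    """Likelihood times utility, per the decision rule above."""
    return prob * payoff

best = max(choices, key=lambda c: expected_value(*choices[c]))
# "join startup" wins (0.10 * 20.0 = 2.0 > 0.95 * 1.0 = 0.95), even
# though it fails nine times out of ten.
```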

  • conchis

    “If this condition fails, then it can become impossible to act rationally.”

    This seems true only if you reject Eliezer’s notion that rationality is fundamentally about winning, rather than about following a particular ritual of cognition.

  • @Unknown

    Many statements about ethics, values or agency become nonsense when we talk in terms of the actual laws of physics. But things make sense again when we talk about approximations to the laws of physics; in the usual approximate language that we use everyday, my thoughts only affect reality through my actions, because the minute changes that occur in my brain when I think are too small to see at the “everyday” level of approximation.

  • I like the way that the metaphor of Allen’s dilemma casts light on Pascal’s Wager but there’s an important distinction the metaphor doesn’t connect to. Pascal’s Wager is about hypothetical, unobservable consequences, while Allen’s problem is based on statistical evidence. Allen can look at the world and see how many people are beset by a similar problem, and how many of them are able to solve their problem with the help of AA. He can see how many become religious and what consequences this has on their subsequent life.

    As EY has pointed out several times, Pascal’s consequences are a stab in the dark and a hypothesis before evidence. It’s hard to take Pascal’s proposal seriously. The argument requires that we give the hypothetical infinite weight, but there’s still zero empirical evidence for it. Allen may be making the right choice, but Pascal was clearly wrong.

  • Larry D’Anna

    I don’t think the AA example does what you say it does. Allen is not choosing to believe in god because of the consequences of that belief. He is choosing a course of action (joining AA) based on the consequences of that action. He judges that the beneficial consequences (sobriety) outweigh the detrimental ones (inaccurate beliefs). When he actually changes his mind about god, he won’t be doing it for the consequences of the belief, he will be doing it because of the social pressure. And though he knows now that this will be his true reason for changing his mind, at the time he actually changes it he will come up with some rationalization that will obscure this fact.

  • A belief that in itself leads to a certain outcome is an action. To rationally determine actions, you need valid beliefs about their consequences.

  • Larry: take the modification I offered a few paragraphs in, and say that it’s the belief in God that makes AA efficacious.

    Nancy: thanks for the footnote — nothing I say here should be taken to imply anything about AA, it’s merely a convenient placeholder for a hypothetical case (if only because I don’t KNOW anything about AA in general — beyond reading the twelve steps online, I’m completely ignorant of how AA functions).

  • Larry D’Anna

    Paul: Ah, you’re right. If the belief itself is what makes AA work, then it seems that, in this case, the act of adopting an irrational belief is a rational action. But I think this scenario is of a different character than Pascal’s. Allen’s predicament arises out of his own weak abilities either to make rational choices as he is or to modify himself so he does better. If Allen could alter himself so he became violently ill as soon as he started drinking, this would be a better choice than adopting false beliefs to achieve the same end. In Pascal’s case the consequence-of-belief doesn’t arise out of Pascal’s own weaknesses, but out of the hypothetical scenario that God exists, cares about what Pascal thinks, and will read Pascal’s mind to find out.

  • L. Zoel

    It seems like this whole discussion is contradicting your previous insight that rational agents should always try to win.

    It seems to me that

    .7 * (no puppies saved) < .3 * (1 puppy saved)

    and hence that we should opt for the latter, regardless of any silliness about what the "truth" is. The rational decision is the one that maximizes the number of (puppies saved), not the one that is most likely to be "true". When I read "Newcomb's problem" my first thought was actually, "Doesn't this justify Pascal's wager?" Now you seem to be contradicting yourself and I'm not entirely sure why.

  • Alan

    Well-taken point from NL on framing the terms of debate.

    The commonality between Pascal’s wager and the calculation by the subject, Allen, is that both appear to make their beliefs about individual future wellbeing dependent upon *imagined* consequences of presently formulated beliefs, rather than upon rational analysis of evidence.

    Pascal reaches his conclusions through deductive logic as well as intuition, while Allen consciously submits to one of a cluster of social constructions calculated to reinforce desired future outcomes. If doing so runs counter to his intuition, then his case is already distinguishable from that of Pascal.

    At a less abstract level, Pascal’s purpose was to dissuade contemporary nonbelievers from shrinking from religion out of fear of its truth, and rather to have them approach it out of hope that it is so. Nowhere does he portray himself as an unbiased observer. He concludes his wager section by observing to the effect that he would be more worried about being in error and then discovering [his theistic beliefs] to be true after all, than not being in error while believing them true. So, for Pascal and Allen, their underlying purposive action appears to be to generate beliefs which serve as emotional markers toward something else, rather than to discover truths.

    Seen another way, may not a central belief such as the one under discussion, formed through conscious will, be viewed metaphorically as a compass? While a compass does not in a physical sense steer a ship, a series of successive moves of the rudder can be made by reference to readings of the compass. Is the activity of Pascal and Allen much different from fashioning compasses by which to steer? I don’t know.

  • sonic

    “if consequences are simply inadmissible in belief-formation processes”
    then what possible objection could be raised to someone holding any belief whatsoever?

  • If the belief itself is what makes AA work, then it seems that, in this case, the act of adopting an irrational belief is a rational action.

    If we must determine how we believe regarding the program before we can determine its effectiveness, its effectiveness is undefined until we reach a conclusion. Given the assumption that we can choose our beliefs, the ‘effectiveness’ is a cipher, a null and empty variable. What matters is the choice – that leads directly to probable outcomes.

    If choosing A leads to a greater probable outcome than ~A, we simply choose A.