Discover more from Overcoming Bias
The Problem at the Heart of Pascal’s Wager
It is a most painful position to a conscientious and cultivated mind to be drawn in contrary directions by the two noblest of all objects of pursuit — truth and the general good. Such a conflict must inevitably produce a growing indifference to one or other of these objects, most probably to both.
– John Stuart Mill, from Utility of Religion
Much electronic ink has been spilled on this blog about Pascal’s wager. Yet I don’t think the central issue, one that relates directly to the mission of this blog, has been covered. That issue is this: there’s a difference between the requirements for good (rational, justified) belief and the requirements for good (rational, prudent — not necessarily moral) action.
Presented most directly: good belief is supposed to be truth- and evidence-tracking. It is not supposed to be consequence-tracking. We call a belief rational to the extent it is (appropriately) influenced by the evidence available to the believer, and thus maximizes our shot at getting the truth. We call a belief less rational to the extent it is influenced by other factors, including the consequences of holding that belief. Thus, an atheist who changed his beliefs in response to the threat of torture from the Spanish Inquisition cannot be said to have followed a correct belief-formation process.
On the other hand, good action is supposed (modulo deontological moral theories) to be consequence-tracking. The atheist who professes changed beliefs in response to the threat of torture from the Spanish Inquisition can be said to be acting prudently by making such a profession.
A modern gloss on Pascal’s wager might be understood less as an argument for the belief in God than as a challenge to that separation. If, Modern-Pascal might say, we’re in an epistemic situation such that our evidence is in equipoise (always keeping in mind Daniel Griffin’s apt point that this is the situation presumed by Pascal’s argument), then we ought to take consequences into account in choosing our beliefs.
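Modern-Pascal's challenge can be put in expected-value terms. The following is a minimal sketch, not anything from Pascal or this post's argument: the 0.5 probability reflects the evidential equipoise the argument presumes, and all payoff numbers are illustrative stand-ins, not claims about actual stakes.

```python
# Hedged sketch of consequence-driven belief choice under equipoise.
# The payoff magnitudes below are arbitrary illustrations.

def expected_value(p_god: float, payoff_if_god: float, payoff_if_not: float) -> float:
    """Expected payoff of holding a belief, given the probability God exists."""
    return p_god * payoff_if_god + (1 - p_god) * payoff_if_not

P_GOD = 0.5  # evidential equipoise, as Pascal's argument presumes

# Illustrative payoffs: believing costs a little if God doesn't exist;
# disbelieving forfeits a large reward if God does.
ev_believe = expected_value(P_GOD, payoff_if_god=1000.0, payoff_if_not=-1.0)
ev_disbelieve = expected_value(P_GOD, payoff_if_god=-1000.0, payoff_if_not=1.0)

# With the evidence tied at 0.5, only the consequences break the tie.
print(ev_believe, ev_disbelieve)
```

The point of the sketch is that when `P_GOD` is exactly 0.5, the evidence contributes nothing to the choice between the two beliefs; the entire decision is carried by the payoffs, which is precisely the separation the post is examining.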
There seem to be arguments for and against that position…
In its favor we can imagine situations where it’s not the nastiness of an all-knowing deity that makes our beliefs consequential, but something about our own psychologies. Imagine Allen. He’s an alcoholic. He makes an all-things-considered judgment that it would be best for him to stop drinking. He also holds the belief that the only way for someone with his psychological characteristics to stop drinking is to join Alcoholics Anonymous. Allen is also an atheist. However, he believes that if he joins Alcoholics Anonymous, his psychological characteristics are such that he will be induced by social pressure to believe in God. Because he’s an atheist, he believes that if that belief change happens, it’ll be because his reasoning process will be warped by social pressure, and his new beliefs will be false and (more importantly) unwarranted by the evidence.
Let’s assume that all of Allen’s present beliefs are warranted by the evidence — that they’re rational by the standards of belief that epistemically competent agents hold. Allen is, in effect, choosing to cause himself to adopt a belief that would be false and irrational by his current lights, in order to bring about better personal consequences. But it’s hard to call Allen’s decision wrong.
If we think that the belief in God is what causes AA to work — if we think it’s the belief itself that’s operative in bringing about the good consequence, then the AA question is structurally indistinguishable from the problem at the heart of Pascal’s wager: the problem of making our beliefs dependent on consequences, rather than just the evidence.
So it seems like the AA example gives us some reason to swallow Pascal’s wager, modulo the other objections (like a multiplicity of religions). But there are arguments on the other side. For one thing, again, remember that Pascal’s original argument presumes that the evidence is in equipoise. It’s somewhat plausible to think of consequences as a "tiebreaker" between beliefs that are uncertain in that way. But it’s less plausible to think that we can sensibly use consequences where evidence is not in equipoise. One major reason for this is that it’s totally unclear how we might relate consequences and evidence in one unified process of belief formation. For example, suppose that I think there’s a 70% chance that P is true, but that my believing P is true will cause one puppy to die. Is the puppy’s death worth 20% + epsilon of my chance at truth, so that I should change my beliefs? How about two puppies? What if someone offers me one dollar? How about a million dollars? What’s the function to convert badness or goodness of consequence into weight of evidence?
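The puppy problem can be made concrete with a sketch. Everything below follows the numbers in the example above (70% credence, a 50% threshold for belief, so 20% + epsilon of credence sacrificed), except the conversion function `f`, which is a loudly flagged invention: the whole objection is that no principled version of `f` exists.

```python
# Hedged sketch of the missing consequence-to-evidence exchange rate.
# The function f below is arbitrary by design; that arbitrariness IS
# the objection, not a solution to it.

CREDENCE_IN_P = 0.70        # I think P is 70% likely to be true
BELIEF_THRESHOLD = 0.50     # below this, I no longer count as believing P
COST_OF_BELIEVING_P = 1.0   # e.g., one dead puppy (the units are the problem)

# Changing my belief means sacrificing (0.70 - 0.50) + epsilon of credence
# that I am tracking the truth.
credence_sacrificed = CREDENCE_IN_P - BELIEF_THRESHOLD

def f(consequence_cost: float) -> float:
    """Hypothetical map from badness of consequence to weight of evidence.
    The 0.1 exchange rate is stipulated, not derived from anything."""
    return 0.1 * consequence_cost

# Under this stipulated f, one puppy doesn't outweigh the lost credence;
# a different stipulation would flip the answer, with nothing to choose
# between them.
should_change_belief = f(COST_OF_BELIEVING_P) > credence_sacrificed
```

Whether `should_change_belief` comes out true or false depends entirely on the made-up coefficient in `f`, which illustrates why "what's the function?" is a serious question rather than a bookkeeping detail.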
This is a very difficult problem, and I don’t purport to offer a solution. But we should treat it as a serious line of objection to Pascal’s-wager-style arguments: if consequences are simply inadmissible in belief-formation processes, Pascal’s argument fails on the spot.
(This is a revised version of a post that I originally wrote a couple of weeks ago, which appears in its original form as a lengthy excursus on doxastic voluntarism on my personal blog, Uncommon Priors. If you’re interested, you might check that out, though it’s less sound, I think, than the current presentation.)