If elections aren’t a Pascal’s mugging, existential risk shouldn’t be either
A response I often hear to the idea of dedicating one’s life to reducing existential risk, or increasing the likelihood of a friendly artificial general intelligence, is that it represents a form of ‘Pascal’s mugging’, a problem memorably described in a dialogue by Nick Bostrom. Because of the absurd conclusion of the Pascal’s mugging case, some people have decided not to trust expected value calculations when thinking about extremely small likelihoods of enormous payoffs.
While there are legitimate question marks over whether existential risk reduction really does offer a very high expected value, and we should correct for ‘regression to the mean’, cognitive biases and so on, I don’t think we have any reason to discard these calculations altogether. The impulse to do so seems mostly driven by a desire to avoid the weirdness of the conclusion, rather than by any sound reason to doubt it.
A similar activity which nobody objects to on such theoretical grounds is voting, or political campaigning. Considering the difference in vote totals and the number of active campaigners, the probability that someone volunteering for a US presidential campaign will swing the outcome seems somewhere between 1 in 100,000 and 1 in 10,000,000. The US political system throws up significantly different candidates for a position with a great deal of power over global problems. If a campaigner does swing the outcome, they can therefore have a very large and positive impact on the world, at least in subjective expected value terms.
While people may doubt the expected value of joining such a campaign on the grounds that the difference between the candidates isn’t big enough, or that the probability of changing the outcome is too small, I have never heard anyone say that the ‘low probability, high payoff’ combination means we must dismiss it out of hand.
What is the probability that a talented individual could avert a major global catastrophic risk if they dedicated their life to it? My guess is that it’s only an order of magnitude or two lower than that of a campaigner swinging an election outcome. You may think this is wrong, but if so, imagine that it’s reasonable for the sake of keeping this blog post short. How large is the payoff? I would guess many, many orders of magnitude larger than swinging any election. For that reason it’s a more valuable project in total expected benefit, though also one with a higher variance.
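To make that comparison concrete, here is a back-of-envelope sketch in Python. Every number in it is an illustrative assumption consistent with the ranges above, not an estimate I am defending; the point is only how the expected values compare.

```python
# Back-of-envelope comparison of the two projects. All figures are
# illustrative assumptions, in arbitrary 'units of good'.

p_swing_election = 1e-6            # within the 1 in 100,000 to 1 in 10,000,000 range
value_better_candidate = 1e9       # assumed payoff from the better candidate winning

p_avert_catastrophe = 1e-8         # ~two orders of magnitude less likely
value_averted_catastrophe = 1e15   # assumed to dwarf any election outcome

ev_campaigning = p_swing_election * value_better_candidate
ev_xrisk = p_avert_catastrophe * value_averted_catastrophe

print(f"Campaigning expected value: {ev_campaigning:,.0f}")  # 1,000
print(f"X-risk expected value:      {ev_xrisk:,.0f}")        # 10,000,000
```

On these made-up numbers the existential risk project comes out four orders of magnitude ahead, even though its probability of success is a hundred times smaller.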
To be sure, the probability and payoff are now very small and very large numbers respectively, as far as ordinary human experience goes, but they remain far away from the limits of zero and infinity. At what point between the voting example and the existential risk reduction example should we stop trusting expected value? I don’t see one.
Building in some arbitrary low-probability, high-payoff ‘mugging prevention’ threshold would lead to the peculiar possibility that, for any given project, an individual with probability x of a giant payoff could be advised to avoid it, while a group of 100 people contemplating the same project, facing a probability of roughly 100x of achieving the same payoff, could be advised to go for it, as the sketch below illustrates. Now that seems weird to me. We need a better solution to Pascal’s mugging than that.
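Here is a minimal sketch of such a threshold rule; the threshold and the numbers are arbitrary assumptions, chosen only to trigger both sides of it.

```python
# A naive 'mugging prevention' rule: dismiss any project whose
# probability of success falls below an arbitrary threshold.
THRESHOLD = 1e-7

def advise(p_success: float, payoff: float) -> str:
    if p_success < THRESHOLD:
        return "avoid (looks like a mugging)"
    return f"go for it (expected value {p_success * payoff:,.0f})"

payoff = 1e15   # the same giant payoff in both cases
p_solo = 1e-8   # one person's chance of pulling it off

# One person falls below the threshold and is told to walk away...
print("Individual:  ", advise(p_solo, payoff))
# ...while 100 people pooling the same effort clear it, even though
# each person's contribution to the expected value is identical.
print("Group of 100:", advise(100 * p_solo, payoff))
```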