A response I often hear to the idea of dedicating one’s life to reducing existential risk, or increasing the likelihood of a friendly artificial general intelligence, is that it represents a form of ‘Pascal’s mugging’, a problem memorably described in a dialogue by Nick Bostrom. Because of the absurd conclusion of the Pascal’s mugging case, some people have decided not to trust expected value calculations when thinking about extremely small likelihoods of enormous payoffs.
While there are legitimate question marks over whether existential risk reduction really does offer a very high expected value, and we should correct for ‘regression to the mean’, cognitive biases and so on, I don’t think we have any reason to discard these calculations altogether. The impulse to do so seems mostly driven by a desire to avoid the weirdness of the conclusion, rather than by any sound reason to doubt it.
A similar activity which nobody objects to on such theoretical grounds is voting, or political campaigning. Considering the difference in vote totals and the number of active campaigners, the probability that someone volunteering for a US presidential campaign will swing the outcome seems somewhere between 1 in 100,000 and 1 in 10,000,000. The US political system throws up significantly different candidates for a position with a great deal of power over global problems. If a campaigner does swing the outcome, they can therefore have a very large and positive impact on the world, at least in subjective expected value terms.
While people may doubt the expected value of joining such a campaign on the grounds that the difference between the candidates isn’t big enough, or that the probability of changing the outcome is too small, I have never heard anyone say that the ‘low probability, high payoff’ combination means that we must dismiss it out of hand.
What is the probability that a talented individual could avert a major global catastrophic risk if they dedicated their life to it? My guess is it’s only an order of magnitude or two lower than a campaigner swinging an election outcome. You may think this is wrong, but if so, grant it for the sake of keeping this blog post short. How large is the payoff? I would guess many, many orders of magnitude larger than swinging any election. For that reason it’s a more valuable project in total expected benefit, though also one with a higher variance.
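The comparison can be sketched numerically. All figures below are illustrative assumptions for the sake of the exercise, not estimates from the post beyond the ranges already given:

```python
# Illustrative expected-value comparison. Every number here is an
# assumption chosen only to match the rough ranges discussed above.

def expected_value(p_success, payoff):
    """Expected value of a long-shot project: probability times payoff."""
    return p_success * payoff

# Campaigner: chance of swinging a US presidential election, taken from
# the 1-in-100,000 to 1-in-10,000,000 range mentioned above.
p_campaign = 1e-6
payoff_campaign = 1.0            # normalise the election payoff to 1 unit

# X-risk work: probability assumed an order of magnitude or two lower,
# payoff assumed many orders of magnitude larger (hypothetical values).
p_xrisk = 1e-8
payoff_xrisk = 1e6

ev_campaign = expected_value(p_campaign, payoff_campaign)
ev_xrisk = expected_value(p_xrisk, payoff_xrisk)

# Despite the much smaller probability, the larger payoff dominates.
assert ev_xrisk > ev_campaign
```

The point is only that multiplying a much smaller probability by a vastly larger payoff can still leave the second project ahead in expected value, exactly as the paragraph above argues.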
To be sure, the probability and payoff are now very small and very large numbers respectively, as far as ordinary human experience goes, but they remain far away from the limits of zero and infinity. At what point between the voting example, and the existential risk reduction example, should we stop trusting expected value? I don’t see one.
Building in some arbitrary low probability, high payoff ‘mugging prevention’ threshold would lead to the peculiar possibility that for any given project, an individual with probability x of a giant payout could be advised to avoid it, while a group of 100 people contemplating the same project, facing a probability ~100*x of achieving the same payoff could be advised to go for it. Now that seems weird to me. We need a better solution to Pascal’s mugging than that.
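The inconsistency of such a threshold rule can be made concrete. The threshold and probabilities below are arbitrary placeholders, as in the paragraph above:

```python
# Sketch of the 'mugging prevention' threshold inconsistency. The
# threshold and probabilities are arbitrary illustrative assumptions.

THRESHOLD = 1e-7   # refuse any gamble whose success probability is below this

def advised_to_proceed(p_success):
    """Threshold rule: proceed only if the success probability clears it."""
    return p_success >= THRESHOLD

p_individual = 1e-8            # one person's chance of the giant payoff
p_group = 100 * p_individual   # ~100 people attempting the same project

# Same project, same payoff: rejected for the individual,
# accepted for the group of 100.
assert not advised_to_proceed(p_individual)
assert advised_to_proceed(p_group)
```

Any fixed cutoff produces some version of this flip, since pooling enough actors can always push the collective probability across the line.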
The issue I see here, both with the mugging situation and with voting or dedicating one’s life to a cause, is that no mention is made of the number of times one will be able to make such a decision. Expected values really only make sense in aggregates; in the mugging situation, Pascal would need to be reasonably certain of being presented with similar choices about a quadrillion times before taking the bet would be a good choice. In dedicating yourself to a cause, however, the aggregate does not come from one individual being able to dedicate their life multiple times over, but from many individuals doing so; the last paragraph above alludes to this, but does not accurately characterize the group of potential actors. It is not just a group of 100 people contemplating the same project that is relevant to the calculation, but all people on the planet capable of carrying out the same activity who may at some point be exposed to and join the cause (there is of course a probability of occurrence for this as well).
Stephen R. Diamond:
Well, I may be able to estimate how much of a wild guess the hypothesis is. I.e. if I am wildly guessing a 9-digit number, that’s a one in a billion chance.
The problem with this is that you obtain only an upper bound on the probability; the actual probability can be arbitrarily lower due to parts of the guess that you did not count.
With Pascal's mugging there is another aspect. A charity working on x-risk, or an approach to x-risk, may very easily be worse than just working and giving money to one randomly chosen person on this planet, or to a random person with a PhD in mathematics, or the like. The fact that someone tells you they are the best deal is not necessarily *any* information that they are the best deal. If society contains say 2-3% psychopaths and several percent narcissists as well, and none of those folks want to work, it's clear that the people who can do something are grossly outnumbered by those who either have no moral qualms about saying whatever, or have their self-assessment hard-wired to 'awesome'.
At this point it is not really about probability assessments but about choosing effective strategies that elicit a response. E.g. you can choose a strategy whereby the utility of some action would be positive for the real deal, but negative for a fake. You can require some definitely-non-bullshit achievements in mathematics or computer science. The real deal will have some from the time they were studying, or from the time they were happily working on AI while unaware of the risks, and these are really cheap to show. But faking them is not worth it just for the sake of defrauding you; it is easier to, say, increase reach and defraud the most gullible.