If elections aren’t a Pascal’s mugging, existential risk shouldn’t be either

A response I often hear to the idea of dedicating one’s life to reducing existential risk, or increasing the likelihood of a friendly artificial general intelligence, is that it represents a form of ‘Pascal’s mugging’, a problem memorably described in a dialogue by Nick Bostrom. Because of the absurd conclusion of the Pascal’s mugging case, some people have decided not to trust expected value calculations when thinking about extremely small likelihoods of enormous payoffs.

While there are legitimate question marks over whether existential risk reduction really does offer a very high expected value, and we should correct for ‘regression to the mean’, cognitive biases and so on, I don’t think we have any reason to discard these calculations altogether. The impulse to do so seems mostly driven by a desire to avoid the weirdness of the conclusion, rather than by any sound reason to doubt it.

A similar activity which nobody objects to on such theoretical grounds is voting, or political campaigning. Considering the difference in vote totals and the number of active campaigners, the probability that someone volunteering for a US presidential campaign will swing the outcome seems somewhere between 1 in 100,000 and 1 in 10,000,000. The US political system throws up significantly different candidates for a position with a great deal of power over global problems. If a campaigner does swing the outcome, they can therefore have a very large and positive impact on the world, at least in subjective expected value terms.

While people may doubt the expected value of joining such a campaign on the grounds that the difference between the candidates isn’t big enough, or the probability of changing the outcome too small, I have never heard anyone say that the ‘low probability, high payoff’ combination means that we must dismiss it out of hand.

What is the probability that a talented individual could avert a major global catastrophic risk if they dedicated their life to it? My guess is it’s only an order of magnitude or two lower than a campaigner swinging an election outcome. You may think this is wrong, but if so, imagine that it’s reasonable for the sake of keeping this blog post short. How large is the payoff? I would guess many, many orders of magnitude larger than swinging any election. For that reason it’s a more valuable project in total expected benefit, though also one with a higher variance.

To be sure, the probability and payoff are now very small and very large numbers respectively, as far as ordinary human experience goes, but they remain far away from the limits of zero and infinity. At what point between the voting example and the existential risk reduction example should we stop trusting expected value? I don’t see one.

Building in some arbitrary low probability, high payoff ‘mugging prevention’ threshold would lead to the peculiar possibility that for any given project, an individual with probability x of a giant payout could be advised to avoid it, while a group of 100 people contemplating the same project, facing a probability ~100*x of achieving the same payoff could be advised to go for it. Now that seems weird to me. We need a better solution to Pascal’s mugging than that.
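
For concreteness, here is that comparison as a toy calculation; a minimal sketch in which every number is a placeholder standing in for the hedged guesses above, not an estimate of anything:

```python
# A toy restatement of the post's comparison. Every number below is a
# placeholder standing in for the hedged guesses above, not an estimate.

p_swing = 1e-6              # chance a campaigner swings the election
                            # (mid-range of the post's 1 in 1e5 .. 1 in 1e7)
v_election = 1.0            # value of swinging it, in arbitrary units

p_avert = p_swing / 100     # "an order of magnitude or two lower"
v_xrisk = 1e6 * v_election  # "many, many orders of magnitude larger";
                            # 1e6 is a deliberately modest stand-in

print(p_swing * v_election)  # EV of campaigning: 1e-06
print(p_avert * v_xrisk)     # EV of x-risk work:  1e-02, 10,000x larger

# The last paragraph's oddity: any "mugging prevention" cutoff that rejects
# an individual's probability x but accepts a group's ~100*x must fall in
# the narrow band between x and 100*x, for every project at once.
x = p_avert
print(x, 100 * x)
```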

  • Tim B.

    I don’t vote because I reckon that I simply can’t tell who I should vote for. So what is the expected value of voting then? I don’t know.

    Existential risk mitigation seems orders of magnitude more difficult than figuring out what party to vote for in an election. Especially if you are not smart and barely educated like me.

    Given those caveats, what is the expected value of thinking about existential risks, let alone contributing money? 
Similarly, you could ask me whether Shinichi Mochizuki proved the ABC conjecture. Even if you told me, I would have no way to verify whether you are right. The best I could do is to trust a consensus of people who are apparently experts. But there is no consensus about existential risks. Not even over at lesswrong.com, as indicated by the most recent survey…

  • Cambias

    There’s the problem of limited knowledge. Jenny McCarthy and other antivaccine activists believed that the MMR vaccine (and by extension other vaccines) was a serious threat. She and others made a tremendous effort to “raise awareness” of this threat. And the whole thing turned out to be a fraud.

    One can think of a host of other existential threats which turned out to be wrong, or much less of a problem than initially supposed. If highly-motivated individuals devote all their energies to fighting each one — you get the modern political system.

  • Jim

    I have never heard anyone say that the ‘low probability, high payoff’ combination means that we must dismiss [voting] out of hand.

    I’ll say it. I say it all the time. Voting is a religious sacrament.

    • Hedonic Treader

      No, it isn’t. It’s taking a walk to the voting booth on a nice day to get some fresh air. And maybe have some coffee with people you meet there. And while you’re on the way, you can shift expected values for political outcomes, which are actually real despite the small probabilities.

      • Thom Blake

        That’s a very surprising take.  Around here, voting is driving through terrible traffic to go to a place with insufficient parking to participate in a taste of soul-crushing bureaucracy.

      • Hedonic Treader

        Huh. I actually can walk to the voting booth in 15 minutes through a nice neighborhood in which I often take walks for fun. I guess it really depends on where you live.

        At any rate, people here who talk about the irrationality of voting should consider how much time they are spending on that discussion, compared to actually voting.

  • http://twitter.com/_Nevermind Nevermind

Umm, Pascal’s Mugging can be trivially solved if we suppose that probability decreases with promised payout. Intuitively, this seems right: I’d estimate the probability of a mugger returning 10,000 livres as significantly lower than that of him returning just 50 livres.
    That reasoning doesn’t apply to voting, or existential risk reduction, though, because there’s no escalation of payout there.
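
    A minimal sketch of that proposal (the decay exponent k is an invented assumption, not anything argued for above):

```python
# A sketch of the comment's proposal: if the probability of actually being
# paid X falls like X**-k, the mugger cannot pump expected value by naming
# a bigger X. The decay exponent k is an invented assumption.

def prob_of_payout(x: float, k: float) -> float:
    """Toy prior: chance the mugger really delivers a payout of size x."""
    return x ** -k

for offer in (50.0, 10_000.0, 1e9, 1e15):
    ev = offer * prob_of_payout(offer, k=2.0)
    print(f"offer {offer:g} livres -> expected value {ev:.3g}")

# With k > 1 the expected value shrinks as the offer grows; with k = 1 it
# stays flat; only k < 1 rewards escalation, which is exactly the constant-
# probability assumption the comment is challenging.
```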

    • gwern0

       A post hoc excuse is not a solution. What *principles* led you to scale probability by payout?

      • http://twitter.com/_Nevermind Nevermind

        What principles led me to assign low probability in the first place? 
        The default is for probability to depend on payout, as different payouts are essentially different events. It’s constant probability that needs explaining.

      • http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

         The principle that gets Pascal’s Mugger on the road is that people tend to tell the truth. Is the principle that small investments tend to produce small payouts less reputable?

        Perhaps, in that the reason the truthfulness principle tends to work has some sort of biological explanation. What kind of explanation do we have for the payout principle? Let me suggest entropy as a candidate.

      • dmytryl

        I think part of the problem with Pascal’s Mugging is the incompatibility between how we parse strings and naive expected utility maximization. The probability of the hypothesis that giving this stranger money would save the world has to jump from ‘never considered before’ to some defined value.

        To get to the core of this part of the issue: LessWrong, in an article linked from its about page, says “We can measure epistemic rationality by comparing the rules of logic and probability theory to the way that a person actually updates their beliefs.” This is rather naive. The very presence of things you never thought about grossly violates the ‘rules of logic and probability theory’. The parsing of strings into considerations and thoughts violates them further. To obtain useful information about the world in the end, you have to apply a lot of highly complicated approximate corrections, especially when it comes to statements parsed from potentially hostile sources.

        I do not know how rationalists picture the ‘updating’ of beliefs. If you want to propagate from some nodes A and B to node C, you need to compute the cross-correlation between A and B and eliminate any feedback through C. For this you need a lot of information that is usually absent. E.g. if you are ‘updating’ on the fact that in 10 trials the drug has not killed the patient, you need information on how those trials were picked (10 survivors out of a million, or 10 survivors out of 10). Such information is often not available, yet rationalists claim to update beliefs anyway.

        Belief propagation is only simple when you are dealing with a tree; when you are dealing with an arbitrary graph, it is NP-complete, and if you want to be more accurate you need to literally invent better approximate algorithms. Replacing a few bits of an existing algorithm with idealizations would be definitely harmful, even though in the descriptive sense the algorithm may appear to be less wrong (as fewer bits of it would mismatch elementary probability theory). And of course the proper way to test something complex is to look and see if it forms better beliefs about not-yet-revealed parts of the world which are available for measurement.
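
        A stripped-down version of the drug example, to make the selection point concrete (the counts are invented):

```python
# Same reported evidence ("10 patients took the drug and survived"), two
# selection mechanisms, opposite conclusions. The counts are invented.

survivors = 10

# Case A: 10 trials were run in total and all patients survived.
trials_a = 10
print(survivors / trials_a)     # naive survival rate 1.0: looks safe

# Case B: the 10 survivors were picked out of a million trials.
trials_b = 1_000_000
print(survivors / trials_b)     # survival rate 1e-05: looks lethal

# Without knowing which mechanism produced the report, 'updating' on it
# is not a well-defined operation, which is the comment's point.
```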

      • http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

        The very presence of things you never thought about grossly violates the ‘rules of logic and probability theory’.

        One possibility I’ve never thought about in a coin flip is that the coin might disintegrate in midair. But why haven’t I reconciled my omission with probability theory merely by including a category: “neither heads nor tails,” which includes the coin landing on its edge as well as events like disintegration that I hadn’t considered?

      • dmytryl

        Well, in the coin flip the outcomes that you didn’t think about are a relatively minor part of it. In things like futurism, one can hardly even claim to have thought properly through a single outcome; it’s like throwing an extremely narrow cylinder onto some plane and expecting it to land on its end and remain standing. Not only is it unlikely that you guessed how the cylinder would land; you didn’t even consider that the plane may be inclined, in which case the cylinder won’t stand stably at all. Except with a zillion such possibilities. Probabilities of correctness of reasoning via ‘I can’t think of a counterargument’ fall off exponentially and can be truly mindbogglingly low.

      • http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

        You’re gifted with an extraordinarily long life, and you’re offered a bet on some grand futuristic hypothesis. How would you rationally decide whether to take it? Or is rationality completely inapplicable to such a decision (given your view that the probability idealization completely breaks down)?

      • dmytryl

        Stephen R. Diamond:

        Well, I may be able to estimate how much of a wild guess the hypothesis is. I.e. if I am wildly guessing a 9-digit number, that’s a one in a billion chance.

        The problem with this is that you obtain an upper bound on probability, and the actual probability can be arbitrarily lower due to parts of the guess that you did not count.

        With Pascal’s mugging there is another aspect. A charity working on x-risk, or an approach to x-risk, may very easily be worse than just working and giving money to 1 randomly chosen person on this planet, or to a random person with a PhD in mathematics, or the like. The fact that someone tells you they are the best deal is not necessarily *any* information that they are the best deal. If you have, say, 2-3% of psychopaths in society and several percent narcissists as well, and none of those folks want to work, it’s clear that the people who can do something are grossly outnumbered by those who either have no moral qualms about saying whatever, or have their self-assessment hard-wired to ‘awesome’.

        At this point it is not really about probability assessments but about choosing effective strategies that elicit a response. E.g. you can choose a strategy whereby the utility of some action would be positive for the real deal but negative for a fake. You can require some definitely-non-bullshit achievements in mathematics or computer science. The real deal will have some from the time they were studying, or from the time they were happily working on AI while unaware of the risks. Really cheap to show. But faking such achievements is not worth it just for the sake of defrauding you; it is easier to, e.g., increase reach and defraud the most gullible.

    • http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

      That reasoning doesn’t apply to voting, or existential risk reduction, though, because there’s no escalation of payout there.

      I think it does apply, unless I misunderstand. If your vote saves the world, you’ve obtained a huge payout, which it can be argued justifies the effort despite the small probability.

      • http://twitter.com/_Nevermind Nevermind

        Pascal’s Mugging relies on the fact that the mugger is free to increase the future payout until the expectation of it is big enough. When the payout grows unbounded, it can make ANY decision desirable (provided that the probability of said payout does not decrease).
        In the case of real-world situations like voting, the payout is fixed, or in any case bounded.

    • Robert Wiblin

      Just seems like an ad hoc solution you would never naturally go for if you weren’t contriving to avoid the mugging outcome.

      • http://twitter.com/_Nevermind Nevermind

        Well, we started with some assumptions about our intuition (the probabilities we intuitively assign in a mugging situation), applied logic, and came to a conclusion that contradicts our intuition. Either our logic is flawed, or the assumptions are incorrect. What I’m doing is challenging an assumption: namely, that we assign equal probabilities to all payouts. It’s obviously _a_ solution; one might argue that it’s not a _good_ solution, but it doesn’t seem contrived to me. Anything being constant is a special case; most values in the world are actually fluctuating and interdependent.

        Also, when _I_ read about Pascal’s Mugging, this was actually the first thing that came to my mind: why is the mugger silently assuming that the expectation of the future payout grows with the mugger’s suggestions?

      • http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

        Also, when _I_ read about Pascal’s Mugging, this was actually the first thing that came to my mind: why is the mugger silently assuming that the expectation of the future payout grows with the mugger’s suggestions?

        Same here. I don’t find the solution contrived at all. Rather, it seems pretty obvious.

    • http://twitter.com/_Nevermind Nevermind

      And here’s another assumption challenged, just for the sake of completeness: the mugger silently assumes that Pascal’s utility function is unbounded, but that might not be the case. Either because Pascal, being human, can only be happy so much, or because he, being human, can only imagine so much happiness. In any case, the mugger can’t increase the expected payout past a certain point, even if he can promise any finite number of happy days.
      I think this explanation is more in line with how people actually think (whether it’s rational or not is another matter).
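
      A minimal sketch of that saturation point, with an invented utility curve (the cap and the scale are arbitrary; only the flattening shape matters):

```python
# A sketch of the bounded-utility point. The cap (1) and the saturation
# scale are invented; the shape just has to flatten out.
import math

def utility(happy_days: float, scale: float = 1e6) -> float:
    """Saturating utility: climbs toward a cap of 1 and never exceeds it."""
    return 1 - math.exp(-happy_days / scale)

for promised in (1e3, 1e6, 1e9, 1e15):
    print(f"{promised:g} happy days -> utility {utility(promised):.6f}")

# Expected utility of any offer is then at most P(mugger pays) * 1, so a
# tiny P keeps it negligible no matter how far the promise escalates.
```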

    • Bo

      Then you could change the probability of something by changing your values. 

  • Robin Hanson

    I’d think the main issue would be having some basis, any basis, for the probability estimates other than an emotional feeling that it’s the sort of problem that someone should do something about, and therefore that the probability must be high enough to justify that. With voting we have good models for calculating the chance of being pivotal. With some existential risks we also have concrete ways to calculate risks. But I’d accept subjective probability estimates from people who are neither involved with, nor strong supporters of, efforts to mitigate those risks.

    • http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

       In addition to emotional feelings, these estimates would seem to be strongly influenced by one’s theoretical account of the process of induction. Pascal’s Mugger is generated by the assumption that Occam’s Razor can be formalized as abstract simplicity, when there’s no such thing as simplicity in the abstract: simplicity is always relative to a language, there’s no absolute language, and our plausibility estimates (priors) are necessarily instead based on “fast and frugal heuristics.”

      Pascal’s mugging is a reductio ad absurdum of “Solomonoff Induction.” It’s revealing that many “rationalists” refuse to recognize it for what it is, or even to see the threat to this cherished belief. They treat it as an anomaly, whereas Pascal’s mugging goes to the heart of what’s wrong with modern inductivism.

      • dmytryl

        Precisely. The achievable value of a scenario of length L grows incomputably faster than 2^L, whereas the probability under S.I. falls only as 2^-L, i.e. the sums simply don’t converge.
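
        Written out (a reading of the claim, taking the comment’s growth rate as given), the expected-utility sum over scenario lengths behaves like

        $$\sum_{L=1}^{\infty} 2^{-L}\, U(L),$$

        which diverges whenever the achievable utility $U(L)$ of a length-$L$ scenario outgrows $2^{L}$, since the terms then fail even to tend to zero.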

        The issue primarily arises from some sort of misunderstanding of the theory. Some rationalists for some reason expect that if they maximize products involving some number they made up, they’ll get something good for this (after calling this number a probability). The universe doesn’t care what you call this number, and doesn’t give cookies for effort. It may be an example of the power of names. When you give a powerful name to a made-up scheme (“Solomonoff Induction”), it acquires magical power in people’s minds. Maybe because the majority of named insights are important.

      • http://www.facebook.com/joao.lourenco.90410 João Lourenço

        This seems like a very reasonable argument against Solomonoff induction, but this is the first time I read someone talking about this. Could you link me to some paper/post/etc on the topic?

      • dmytryl

        João Lourenço: I came up with this one myself a while back (it seems rather obvious), and I recall that later, in one of Hutter’s papers, he mentions without going into much detail that unbounded utility probably won’t work. I think I have seen a paper concerning specifically this topic but the name evades me.

        I thought some more about this matter, and ultimately the issue is that the strong intuition that ‘expected utility maximization is good for you’ rests on some intuitive notion of objective-ish probability, such as the one governing the orientation of an objectively symmetrical die that has bounced sufficiently many times, for which you can reason correctly about the probability without being able to predict the outcome of the toss. Ditto for pseudorandom number generators, which also employ symmetries. This is an extreme exception, not the rule. Nature does not make fair dice.

        Why do we even expect something good to come from maximization of ‘pick made-up numbers as probabilities, update using other made-up numbers in Bayesian manner’? Looks like cargo cult behaviour to me. All possible justifications would be tautological. Mathematics is strictly a garbage-in, garbage-out process. Put the result of watching Terminator when little, and of reading fiction, through Bayes’ rule, and there’s no word for what you get as a result; but if you think in words you might just jump to the word ‘probability’.

        Especially as the most avid proponents of rationalism right now (top people at CFAR) are also people who don’t seem to have faced the necessity of understanding the topic coherently enough for non-trivial use of the relevant mathematics in the context of a workplace, a contest, or even an exam.

      • http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

         It might be worth remarking that advocates of the Many Worlds Interpretation of quantum mechanics sometimes (often?) rely on Solomonoff Induction to maintain that positing an uncountably infinite set of worlds is parsimonious.

      • http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

        The issue primarily arises from some sort of misunderstanding of the theory. Some rationalists for some reason expect that if they maximize products involving some number they made up, they’ll get something good for this (after calling this number a probability).

        I think the misunderstanding is driven primarily by a faith in foundationalist epistemology, of the kind Richard Rorty rubbished in “Philosophy and the Mirror of Nature.”

        But perhaps it isn’t surprising that rationalists who are computer programmers by trade would think computer code is the fundamental language of the universe. (On occupation and ideology, see my “The practical basis for mass ideologies: Construal-level theory of ideologies meets habit theory of morality,” http://tinyurl.com/6uqusqc .)

      • dmytryl

         @ Stephen R. Diamond

        Ohh, that’s my pet peeve with the advocate you are speaking of, and with the repeaters club. MWI is not even a valid code in Solomonoff Induction. You only deal with codes whose outputs begin with the data (not merely contain it), and for a very simple reason. Seriously, how hard can it be to think of a code that counts from 1 to infinity? It’s not even that they are promoting misconceptions; it’s that those misconceptions are not even smart.

      • http://www.facebook.com/people/Jorge-Emilio-Emrys-Landivar/37403083 Jorge Emilio Emrys Landivar

        It’s *much* worse than you say. Bayesianism is impossible, because our priors themselves filter our updating. This means that if we have certain priors, we *cannot* inductively get from certain point As to certain point Bs.

        This means certain events are in your mind given not a small probability, but a *zero* probability.  This incidentally also solves the problem of Pascal’s Mugging.

      • http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

         Jorge Emilio Emrys Landivar,

        Could you perhaps provide some priors and show how this works in relation to Pascal’s Mugging? I don’t see how a Bayesian analysis can assign certain events a zero probability.

        (I agree that in an ultimate sense, Bayesianism is “impossible.” But my reason is that the Bayesian framework itself (and related logical truths) must, in effect, be assigned a probability of 1, which is wrong where probability is conceived of as degree of belief. We can and do doubt analytic claims.)

      • http://www.facebook.com/people/Jorge-Emilio-Emrys-Landivar/37403083 Jorge Emilio Emrys Landivar

        Funny example with priors:
        There is no god, and hearing voices means you are crazy with a high percent probability.
        Talk to any atheist and ask them what they would do if a voice from the sky told them to believe in God.
        They will answer that they would think themselves insane and try to get it fixed. This means that, with certain collections of priors, someone who is *trying* to be Bayesian often gets stuck, because the filtering mechanism of perception filters data before any updating happens.

        In order for us to be Bayesian and not get stuck priors, we would have to have raw access to the data which we use to update our priors. Unfortunately, for most important choices this is impossible, because our minds always filter and categorize the data we receive.

    • Michael Vassar

      Would you accept estimates that supporters of risk X make of efforts to mitigate risk Y?

      • Robin Hanson

        I guess it would depend on how closely related are those groups.

    • dmytryl

      Precisely.

      A made-up number is not a probability and doesn’t obey the rules of probability theory (e.g. the probabilities you assign to all possible outcomes will likely not add up to 1). Calling it a probability and then calculating as if it were a probability, expecting to win more, is just cargo cult behaviour: building an imitation of a runway and expecting planes to land.

      Furthermore, the computational process that would produce an actual probability in this case would be vastly, vastly more expensive than building an AI or FAI. That’s the thing. To find the probability you need to evaluate all the possibilities.

      We are dealing with the kind of uncertainty that is not accurately captured by the notion of probability. Most importantly, it won’t add to 1. This throws all the ‘expected value’ calculations out of the window. These numbers, let’s call them ‘degrees of my feeling’ (DMF), are not probabilities in any sense of the word (neither frequentist nor Bayesian), and there is no reason whatsoever to expect the products of DMFs with the values of outcomes to behave anything like expected values, nor is there any reason to expect anything good to come out of choosing actions for which DMF*outcome is maximal.

      Bottom line is, those numbers can be anything (a function of the age at which you first watched Terminator, the science fiction you have read, the strength of fear-related connections in your brain, and other innate tendencies), or they can simply be random. But there’s one thing those numbers cannot possibly be: actual probabilities of the ‘AI destroys the world’ scenario, under any meaning of the word probability. There’s simply no way for those numbers to have any connection with this scenario actually occurring. All such connections are too expensive for you to evaluate in any manner, subconsciously or consciously.

      • rationalist

        This sounds like a complete rejection of probabilistic reasoning. Which is fine. I just want to point out that the issues you bring up occur whenever a human being tries to come up with an explicit probability for anything practical (so obviously you can reason correctly about a six-sided die or a deck of cards).

      • http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

          I just want to point out that the issues you bring up occur whenever a human being tries to come up with an explicit probability for anything practical

        Most people don’t reason about practical choices by coming up with explicit probabilities, the main reason probably being that we’re highly unreliable in making such estimates.

    • Anonymous

      “But I’d accept subjective probability estimates from people who are neither involved with, nor strong supporters of, efforts to mitigate those risks.”
      That would select for the less worried, who didn’t see the need to do anything.

  • Zoe

    Is the chance that their vote might swing the election really the reason most people vote?  To the extent people think voting through, it seems their justification is more likely to be a combination of Kant’s categorical imperative & support for a democratic cultural ritual.  And if you believe in the platform of a particular political party, the benefit of volunteering for a campaign is more than just the chance of swinging a particular election as well.  You’re trying to build awareness of the issues, change minds, and convince people to become more engaged citizens on an ongoing basis.

    • Robert Wiblin

      It isn’t, but that doesn’t mean it couldn’t be a good reason.

  • James

    “The US political system throws up significantly different candidates” WHAT?!

  • http://www.facebook.com/profile.php?id=723726480 Christopher Chang

    Campaigns are relatively well understood: there have been enough elections that a strategist can do a reasonable job of estimating the impact of a marginal dollar on their favored candidate’s chances. This knowledge drives the recruitment of a nonzero, not-totally-ridiculous number of campaign volunteers on each side of a contest.

    In contrast, we don’t have much history of losing against existential risks.

    • Robert Wiblin

      Is the objection to Pascal’s mugging not the low probability, but the high uncertainty about the probability?

  • http://mugwumpery.com/ mugwumpery.com

    The solution to Pascal’s mugging (and so indirectly, the election problem) is recognizing the hidden costs.

    The reason Pascal should not hand his wallet to the mugger is that if he accepted all such arguments, he’d hand over ALL his money and still have an infinitesimally-low probability of any payoff at all. If he lives long enough (forever), he’ll hand over an infinite amount of money in return for a probable payoff of zero.

    I think there are infinities on both sides of the scale. But the contents of Pascal’s infinite wallets (over an infinite lifetime) are more valuable than the infinite payoff multiplied by the infinitesimal probability of payoff.

    (I’m no mathematician, but I can count.)

    • gwern0

      > The reason Pascal should not hand his wallet to the mugger is that if he
      > accepted all such arguments, he’d hand over ALL his money and still
      > have an infinitesimally-low probability of any payoff at all.

      An astronaut, having sold all his worldly goods in preparation for his suicidal one-way mission to Mars, is walking to his rocket with the money in his pocket because he was too busy to spend it all; he is accosted by a representative of GiveWell’s top charity and a mugger who both ask for all his money since he no longer needs it. He reasons that between his suicidal mission and imminent poverty, he need not fear any future decision theoretic consequences of his donation, and gives it to the mugger…

      • Carl Shulman

        GiveWell’s top charity has some effect, positive or negative, on existential risk. GiveWell would claim the sign is positive (i.e. reducing risk), via routes such as slightly increasing global GDP. The mugger from Bostrom’s paper has overwhelming evidence against him (e.g. trying to rob you by force before coming up with the alternative). If you think the sign of GiveWell’s top charity’s effect on existential risk is favorable at all, then it will win over something that has such odds against it.

        If it were malaria relief vs DNA vaccines or seed banks or some more plausible route to existential risk reduction, then it would depend on the details of the evidence and one’s valuation of extreme outcomes.

      • gwern0

        I don’t see what that has much to do with my refutation of mugwumpery’s claim that you shouldn’t give solely because it encourages other people to mug you…

        Besides that, I’m not sure what argument you’re making. The whole point of the mugging, as made very clear in Bostrom’s dialogue & Baumann’s reply, is that the mugger sets his offered reward at *whatever is necessary to overcome the overwhelming evidence*.

    • Robert Wiblin

      I think this looks like a fruitful line of thinking. You can get a higher probability of an enormous payoff some other way.

      • http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

         Gwern’s thought experiment shows this approach fails.

      • Carl Shulman

        See my reply to that comment.

    • Carey

      This reasoning displaces ordinary muggings with a new one. If we succumb to muggings, we should now take every possible measure to prolong our lives, no matter the harm that we thereby cause.

  • Arthut

    Voting and political campaigning aren’t about elections.

  • Joachim Schipper

    Note that volunteering for a political campaign may have nicer side-effects than trying to save the world in a “weird” way; since the risk-adjusted utility of swinging the election doesn’t seem to be *that* great (chiefly because the probability is, as you point out, very small), the side-effects arguably dominate the subjective utility calculation.

    (Or, in English, “people volunteer because it gets them into cool colleges/jobs/circles, not to win elections”.)

  • gwern0

     > You may think this is wrong, but if so, imagine that it’s reasonable for the sake of keeping this blog post short.

    Elections and existential risk: one is a centuries-old adversarial tradition with precise end goals and quick feedback, to which cutting-edge relevant research is applied thanks to literally billions spent each year (just for the Presidential election) by hundreds or thousands of distinct interest groups, the mere media coverage of which dominates the news for a year and more in advance of the actual event, such that pretty much everyone in the country can tell you how things are going for each candidate. The other is none of those.

    So I think I will object: if elections are isomorphic to existential risk, then they are a *compellingly bad thing to bother working on*.

  • http://newstechnica.com David Gerard

    This would certainly be a valid comparison if all votes were independent actions that did not interact in any way. But this does not in fact hold for actual voters and movements to get out the vote for a viewpoint or candidate.

  • Sieben

    Pascal’s mugging can be resolved. The problem is that Pascal is really myopic. If you consider all the far-fetched probabilities, like maybe the opposite will happen if you hand over your wallet, or maybe you’ll get a quadrillion utils regardless of what you do, then you don’t let yourself get mugged.

    If you make decisions where you have good knowledge of probabilities, Pascal works. If you do not have good knowledge of probabilities, it still works so long as you thoroughly outline all the knowledge you do not have.

  • Lawrence D’Anna

    What about the Nassim Taleb critique?  That as probabilities get very small, it becomes very hard to know their value (or even the logarithm of their value).

    • Michael Vassar

      He argues that they tend to be underestimated though.

  • http://www.facebook.com/eric.hammer.752 Eric Hammer

    With politics, there might also be the question of some people really believing that the chance of their candidate improving the world is close to 100% if he wins election. Most people seem to have a fairly strong faith that the politicians on their team are the “good guys” and those on the opposing team are the “bad guys,” with a little variation in degree. I suspect the sorts of people who volunteer for campaigns, then, are either true believers, or cynics who just want it on their resume.

    • Carl Shulman

      The belief that one party will improve the world *by your own lights and loyalties* is easier to justify than the claim that one party is better in some objective impartial sense. If people have loyalties to groups that are not coextensive with their electoral jurisdictions, and these loyalties would not dissolve in reflective equilibrium, then a party that systematically favors those groups can do better by those standards.

      Of course, this is just the description of politics as ritualized civil war: mutual arms reduction (C-C) would be better than escalating conflict and defection.

  • Miley Cyrus

    “What is the probability that a talented individual could avert a major global catastrophic risk if they dedicated their life to it? My guess is it’s only an order of magnitude or two lower than a campaigner swinging an election outcome.”

    It’s probably an order of magnitude higher. One person donating $10,000/yr (easy to do if you’re frugal and moderately talented) to the SIAI can increase its budget by 1.5%. A $10,000 political donation would only constitute 0.00017% of 2008 campaign spending.
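
    A back-of-envelope check of those two percentages (the implied totals are just what the quoted figures entail, not independently sourced numbers):

```python
# Back-of-envelope check of the two percentages above. The implied totals
# follow from the quoted figures; neither is independently sourced here.

donation = 10_000
siai_budget = donation / 0.015         # $10k = 1.5%     -> ~$666,667
campaign_2008 = donation / 0.0000017   # $10k = 0.00017% -> ~$5.88 billion

print(f"implied SIAI budget:    ${siai_budget:,.0f}")
print(f"implied campaign total: ${campaign_2008:,.0f}")
print(f"proportional leverage:  {campaign_2008 / siai_budget:,.0f}x")
```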

    • http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

      It’s probably an order of magnitude higher.

      Too bad we can’t do a prediction market! That lacking, perhaps we should average Robert’s and Miley’s estimates. :)

    • Hedonic Treader

      You’re implying that the SIAI can actually reduce existential risk if only it has more funding. Do you have evidence of that?

    • Carl Shulman

      Aggregate government spending on biosecurity and nuclear nonproliferation efforts is measured in billions per year. Millions per annum are spent on asteroid tracking. 
      http://www.beguide.org/nuclear-non-proliferation.html

      The Nuclear Threat Initiative operates on the tens-to-hundreds of millions of dollars scale. Expenditures directed at global catastrophic risks (not necessarily for their existential risk or even global risk components, as opposed to national level components, but still) are substantially larger than US Presidential election spending.

      An argument like the one you’re making would have to be that some particular interventions for particular risks are neglected relative to the expected marginal effectiveness of operating on them now. You can argue that GCRs are neglected relative to their importance, but they are not ignored.

      • Miley Cyrus

        The quote was referring to “a global catastrophic risk”, not “global catastrophic risks in general.”

  • dmytryl

    I don’t think there is any reason to describe such numerology as calculations in the first place, and their outcome as expected value.

    The issue is that you make up a number and call it a probability and expect that you will ‘win’ more if you treat it like a probability. But it was never a probability; it was a made-up number. The made-up numbers for everything do not follow the rules of probability theory (most dramatically, you have no way of ensuring that they add up to 1), and there is no reason to expect any improvement in anything from maximization of a function of made-up numbers.

    Does it make sense to act upon made up and entirely unjustified numbers? No. Does it make sense to act upon a function of a made up unjustified number? No again.

    It only seems to make sense when you deceive yourself with regard to the nature of the values that you calculate. It is a bit like cargo cult science. Science calculates, say, the electrical properties of silicon for the sake of building your computer. Life is full of examples of useful calculations like this. And so you think: I will build this runway that looks very much like the real runway, and the cargo planes will land. Except it is all wrong, and there’s far more to having those planes land.

    • eb

      “I don’t think there’s any reason to describe such inferomancy as deductions in the first place, and their outcome as logical consequences.

      The issue is that you make up a bit and call it a truth value and expect that you will ‘win’ more if you treat it like a truth value. But it was never a truth value; it was a made-up bit. The made-up bits for everything do not follow the rules of propositional logic (most dramatically, you have no way of ensuring that they are consistent), and there is no reason to expect any improvement in anything from acting on the basis of a deduction from made-up propositions.”

      Does it make sense to act upon made up and entirely unjustified propositions? No. Does it make sense to act upon a deduction from a made up unjustified proposition? No again.”

      Point: when you use propositional logic to help you think about real-life matters, your premises are “unjustified” and “made up” in exactly the same way as your probabilities are unjustified and made up when you use decision theory for the same purpose. This doesn’t mean that you should never use propositional logic.

      • dmytryl

        That’s the cargo cult mentality in a nutshell: focus on the superficial similarities between your runway and the real runway, along with motivated blindness about the differences. How do you think cargo cults work? They literally can’t (or don’t want to) understand the crucial difference between their imitation of a runway and the real runway. They say: okay, we agree that the colour of the runway is a little off, but we are working on it. Being entirely oblivious to the whole enormous logistics of actually setting up a real airport.

        Logic has been tested and found to be useful. It has been found that it doesn’t arrive at contradictory conclusions in practice, despite much work to find such contradictions.

        None of that is true of the reasoning behind the ‘estimates’ of the risk from rogue AI. It is as distant from real reasoning as a cargo cult’s runway is from a real airport.

        The most obvious test, though, is this: the people behind this whole silly exercise claim superior rationality to most scientists; they claim to see things that scientists miss. The cargo cult’s runway claims superior headphones. Well, let’s listen to those headphones. Utter silence: it is just a wooden imitation. They can’t sell those headphones to audiophiles. The headphones are for this pseudo-runway only.

        Likewise, superior rationality (especially epistemic rationality) should result in testable hypotheses about the real world, which can be put to experimental tests. None of that comes out.

  • http://www.facebook.com/people/Kim-Øyhus/1275353424 Kim Øyhus

    Saturation is an answer. To quote a few rich people I know: “You can only get satiated.”

    I think this translates into a maximum of 100% probability when one formulates Pascal’s mugging in a sensible probabilistic framework. In other words, we seem more interested in the probability of something than in the value of something.

  • Tim Tyler

    People don’t vote in national elections in order to influence the results. Rather voting is part of a behaviour pattern to do with affiliating with powerful individuals, being part of a team, and being involved in big and important moral and political issues.  It is in the interests of politicians to manipulate their supporters into voting.  The human brain is malleable – so sometimes they succeed.  From this perspective, the link with Pascal’s wager unravels.
     

  • Michael Mouse

    There are many other ways of spending your money. Expectation value alone (even if the calculation is solid, which I don’t think it is in Pascal’s Mugging) isn’t a good guide. Great yields at very long odds are usually bad bets. Consider lotteries in massive rollover weeks: there may well be a positive expectation value, but if you spend all your money on tickets you will almost certainly end up considerably poorer.

    The Kelly criterion gives a limit on how much of your bankroll you should stake on risky prospects if you want to maximise log wealth over time. (Assuming you are not remotely loss-averse and have infinite time, and also assuming your estimates of probability and payout are correct, i.e. only Knightian risk, no Knightian uncertainty. I wouldn’t advise real humans to bet this riskily.)

    One interesting feature of the formula is that, as the expected payout gets bigger, the fraction of your wealth you should stake approaches the probability of the payout.

    For a lottery that you win with probability 1 in 100 million, you should stake only up to a hundred millionth of your wealth. Chances are that’s less than the price of a ticket. 

    For Pascal’s mugging (Nick Bostrom’s version), you should stake only up to a 10 quadrillionth of your wealth. Your wallet almost certainly contains more than that, so you should keep hold of it.
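
    A sketch of those three claims, using the standard Kelly fraction f* = p - (1 - p)/b for a bet won with probability p at net odds of b-to-1 (the probabilities are this comment’s examples, not estimates):

```python
# Kelly fraction for a bet won with probability p at net odds of b-to-1.
# The example probabilities are the comment's, not endorsed estimates.

def kelly_fraction(p: float, b: float) -> float:
    """Log-wealth-maximising fraction of bankroll to stake."""
    return p - (1 - p) / b

# As the payout b grows, f* approaches the win probability p itself:
for b in (200.0, 1e4, 1e8):
    print(f"b = {b:g}: f* = {kelly_fraction(0.01, b):.6f}")

# Rollover lottery, p ~ 1e-8: stake at most ~1e-8 of your wealth.
print(kelly_fraction(1e-8, 1e9))

# Bostrom's mugging read as p ~ 1e-16: stake at most ~1e-16 of your
# wealth, which is far less than the wallet's contents.
print(kelly_fraction(1e-16, 1e17))
```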

  • http://www.facebook.com/people/Jorge-Emilio-Emrys-Landivar/37403083 Jorge Emilio Emrys Landivar

    “while a group of 100 people contemplating the same project, facing a probability ~100*x of achieving the same payoff could be advised to go for it.”
    This is the way humans usually behave.  

  • Bo

    This kind of Pascal’s mugging critique could theoretically come up whenever there’s a “this is an important problem and *someone* needs to work on it, but the organization asking you to sponsor it is small and probably won’t be able to make much difference, especially with your small contribution”, when the effect of a small contribution is to raise the probability of a big lump benefit that either happens or not (asteroid avoided or not, positive singularity or not, the startup becoming the next Facebook or not…), instead of slightly raising the amount of a continuous benefit (a few more lives saved / QALYs created) (even though both donations can yield similar amounts of expected utility).

    But practically, I think whenever this comes up, there’s been some kind of communication failure. People are able to see the value in longshot projects like protecting against asteroids and supervirus outbreaks, or investing in speculative cancer treatments or ambitious startups, even though they have low probabilities of success with big lump benefits in case of success. So I don’t think the fact that it’s a long shot is people’s true rejection when it comes to existential risk. The true rejection might be something more like their suspicion that you just decided to invent some far-fetched bullshit way in which there could be massive benefits in order to bamboozle people with Pascal-style arguments, and the fact that you haven’t properly convinced them that your idea is even plausible and that you’re arguing in good faith. So IMO Pascal’s mugging is really almost never relevant and nobody benefits from discussing it.

  • richatd silliker

    There is something illegitimate about this post. The only thing it brings to mind is the chorus from the song “Time of your life” by Green Day.

    “It’s something unpredictable but in the end

    It’s right I hope you’ve had the time of your life”

  • http://www.facebook.com/people/Jorge-Emilio-Emrys-Landivar/37403083 Jorge Emilio Emrys Landivar

    “while a group of 100 people contemplating the same project, facing a probability ~100*x of achieving the same payoff could be advised to go for it. Now that seems weird to me.”

    The reason for voting is that this is true. However, you get the 100 people involved by convincing them all that there is a moral reason to be involved… in short, to solve the collective action problem, you need an institutional solution.

    • http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

       How is coordinating around a putative moral reason an institutional solution?

      • http://www.facebook.com/people/Jorge-Emilio-Emrys-Landivar/37403083 Jorge Emilio Emrys Landivar

        The way people “coordinate around a putative moral reason” is by forming institutions.  In the US, the examples are readily available within the party structures and within the various institutions associated with them. 

  • V_V

    There is a fundamental difference between voting and funding “existential risk reduction”:

    Choosing whether to vote is a game without a Nash equilibrium: if everybody thinks voting is not worth the effort because their vote wouldn’t make a difference, then nobody would vote, and thus voting would actually make a difference.

    What typical voters actually do is adopt a cooperative strategy: they use their morality, or their sense of tribal clanship if you prefer, to coordinate with like-minded voters. Voting as a bloc, they can actually make a difference.

    Existential risk reduction is a different beast: it’s not a problem of coordinating with other donors vs. freeloading.
    If you think that a particular approach to risk reduction is ill-suited, then it’s not just your small donation that will not make a difference; any reasonable amount of cumulative donations will not make a difference. Thus, you have no ethical reason to coordinate with other donors.

    • John Salvatier

      Voting does have a Nash equilibrium, but it’s mixed: some people vote and some people don’t (perhaps people flip a coin to decide).
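
      A small numeric sketch of such a mixed equilibrium, under toy assumptions (independent turnout, an even split between candidates, and an invented cost/benefit ratio; none of this comes from the comment itself):

```python
# Toy model: n identical citizens each vote with probability q. In the
# mixed equilibrium, the chance of being pivotal exactly repays the cost
# of voting. All numbers are illustrative assumptions.
from math import comb

def pivot_prob(n_others: int, q: float) -> float:
    """P(your vote breaks a tie) when each of n_others votes w.p. q."""
    total = 0.0
    for k in range(0, n_others + 1, 2):        # a tie needs an even turnout
        turnout = comb(n_others, k) * q**k * (1 - q)**(n_others - k)
        total += turnout * comb(k, k // 2) * 0.5**k
    return total

def equilibrium_q(n_others: int, cost_over_benefit: float) -> float:
    """Bisect for the q where pivot probability equals cost/benefit."""
    lo, hi = 1e-9, 1.0                          # pivot_prob decreases in q
    for _ in range(60):
        mid = (lo + hi) / 2
        if pivot_prob(n_others, mid) > cost_over_benefit:
            lo = mid                            # voting still profitable
        else:
            hi = mid
    return (lo + hi) / 2

print(equilibrium_q(100, 0.2))  # an interior q: some vote, some abstain
```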

  • Ben

    The issue I see here, both with the mugging situation and with voting / dedicating one’s life to a cause, is that no mention is given to the number of times one will be able to make such a decision. Expected values really only make sense in aggregates; so in the mugging situation Pascal would need to be reasonably certain of being presented with similar choices about a quadrillion times before making the bet would be a good choice.
    In dedicating yourself to a cause, however, the aggregate does not come from one individual being able to dedicate their life multiple times over, but from many individuals doing so; the last paragraph above alludes to this, but does not accurately characterize the group of potential actors. It is not just a group of 100 people contemplating the same project that is relevant to the calculation, but actually all people on the planet capable of carrying out the same activity who may at some point be exposed to and join the cause (there is of course a probability of occurrence for this as well).