A response I often hear to the idea of dedicating one’s life to reducing existential risk, or increasing the likelihood of a friendly artificial general intelligence, is that it represents a form of ‘Pascal’s mugging’, a problem set out in Nick Bostrom’s dialogue of the same name.
The issue I see here, both with the mugging situation and with voting / dedicating one's life to a cause, is that no mention is given to the number of times one will be able to make such a decision. Expected values really only make sense in aggregates; so in the mugging situation, Pascal would need to be reasonably certain of being presented with similar choices about a quadrillion times before making the bet would be a good choice.

In dedicating yourself to a cause, however, the aggregate does not come from one individual being able to dedicate their life multiple times over, but from many individuals doing so; the last paragraph above alludes to this, but does not accurately characterize the group of potential actors. It is not just a group of 100 people contemplating the same project that is relevant to the calculation, but all people on the planet capable of carrying out the same activity who may at some point be exposed to and join the cause (there is, of course, a probability of occurrence for this as well).
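The aggregation point can be sketched with made-up numbers: a bet can have positive expected value and still be a near-certain loss for any single bettor who only gets to take it a handful of times.

```python
import random

random.seed(0)

# Hypothetical bet (numbers invented for illustration): pay 10 units; with
# probability 1e-6 the payout is 1e8, so the naive expected value per bet
# is 1e-6 * 1e8 - 10 = +90.
def one_bet():
    return -10 + (1e8 if random.random() < 1e-6 else 0)

analytic_ev = 1e-6 * 1e8 - 10   # about +90 per bet

# One person betting 100 times will almost surely just lose 1000 units;
# the average only approaches +90 when millions of bets are aggregated.
one_lifetime = sum(one_bet() for _ in range(100))
```

The design point is that the mean converges to the expected value only over an aggregate far larger than any individual's opportunities, which is the commenter's objection to single-shot expected-value reasoning.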
@Stephen R. Diamond:
Well, I may be able to estimate how much of a wild guess the hypothesis is. I.e. if I am wildly guessing a 9-digit number, that's a one-in-a-billion chance.
The problem with this is that you obtain an upper bound on probability, and the actual probability can be arbitrarily lower due to parts of the guess that you did not count.
With Pascal's mugging there is another aspect. A charity working on x-risk, or an approach to x-risk, may very easily be worse than just working and giving money to one randomly chosen person on this planet, or to a random person with a PhD in mathematics, or the like. The fact that someone tells you they are the best deal is not necessarily *any* information that they are the best deal. If society contains, say, 2 to 3% psychopaths and several percent narcissists as well, and none of those folks want to work, it's clear that the people who can actually do something are grossly outnumbered by those who either have no moral qualms about saying whatever, or have their self-assessment hard-wired to 'awesome'.
At this point it is not really about probability assessments but about choosing effective strategies that elicit a response. E.g. you can choose a strategy whereby the utility of some action would be positive for the real deal but negative for the fake. You can require some definitely-non-bullshit achievements in mathematics or computer science. The real deal will have some from the time they were studying, or from the time they were happily working on AI, unaware of the risks. Really cheap to show. But faking such achievements is not worth it just for the sake of defrauding you; it is easier to, e.g., increase reach and defraud the most gullible.
You're gifted with an extraordinarily long life, and you're offered a bet on some grand futuristic hypothesis. How would you rationally decide whether to take it? Or is rationality completely inapplicable to such a decision (given your view that the probability idealization completely breaks down)?
Well, in the coin flip, the outcomes that you didn't think about are a relatively minor part of it. In things like futurism, one can hardly even claim to have thought properly through a single outcome; it's like throwing an extremely narrow cylinder onto some plane and expecting it to land on its end and remain standing: not only is it unlikely that you guessed how the cylinder would land, you didn't even consider that the plane may be inclined, in which case standing on end won't even be stable. Except with a zillion such possibilities. Probabilities of correctness of reasoning via 'I can't think of a counterargument' fall off exponentially and can be truly mind-bogglingly low.
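The exponential falloff can be made concrete with made-up numbers: if a chain of reasoning rests on n independent unexamined assumptions, each of which holds with probability q, the whole chain survives with probability q**n.

```python
# Illustrative numbers only: 50 unexamined assumptions, each 90% likely
# to hold, leave roughly half a percent chance that the whole argument
# survives intact -- and real futurist arguments lean on far more than 50.
q, n = 0.9, 50
p_chain = q ** n   # roughly 0.005
```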
> I just want to point out that the issues you bring up occur whenever a human being tries to come up with an explicit probability for anything practical.
Most people don't reason about practical choices by coming up with explicit probabilities, the main reason probably being that we're highly unreliable in making such estimates.
> The very presence of things you never thought about grossly violates the 'rules of logic and probability theory'.
One possibility I've never thought about in a coin flip is that the coin might disintegrate in midair. But why haven't I reconciled my omission with probability theory merely by including a category: "neither heads nor tails," which includes the coin landing on its edge as well as events like disintegration that I hadn't considered?
This sounds like a complete rejection of probabilistic reasoning. Which is fine. I just want to point out that the issues you bring up occur whenever a human being tries to come up with an explicit probability for anything practical (so obviously you can still reason correctly about a six-sided die or a deck of cards).
I don't see what that has much to do with my refutation of mugwumpery's claim that you shouldn't give solely because it encourages other people to mug you...
Besides that, I'm not sure what argument you're making. The whole point of the mugging, as made very clear in Bostrom's dialogue & Baumann's reply, is that the mugger sets his offered reward at *whatever is necessary to overcome the overwhelming evidence*.
I think part of the problem with Pascal's mugging is the incompatibility between how we parse strings and naive expected utility maximization. The probability of the hypothesis that giving this stranger money would save the world has to jump from 'never considered before' to some defined value.
To get to the core of this part of the issue: the way LessWrong describes it, in an article linked from the about page, is that "We can measure epistemic rationality by comparing the rules of logic and probability theory to the way that a person actually updates their beliefs." This is rather naive. The very presence of things you never thought about grossly violates the 'rules of logic and probability theory'. The parsing of strings into considerations and thoughts violates them further. To obtain useful information about the world in the end, you have to apply a lot of highly complicated approximate corrections, especially when it comes to statements parsed from potentially hostile sources.
I do not know how rationalists picture the 'updating' of beliefs. If you want to propagate from some nodes A and B to node C, you need to compute the cross-correlation between A and B and eliminate any feedback through C. For this you need a lot of information that is usually absent. E.g. if you are 'updating' on the fact that in 10 trials the drug has not killed the patient, you need information on how those trials were picked (10 survivors out of a million, or 10 survivors out of 10). Such information is often not available, yet rationalists claim to update beliefs anyway.

Belief propagation is only simple when you are dealing with a tree; on an arbitrary graph it is NP-complete, and if you want to be more accurate you need to literally invent better approximate algorithms. Replacing a few bits of an existing algorithm with idealizations would be definitely harmful, even though in the descriptive sense the algorithm may appear to be less wrong (as fewer bits of it would mismatch elementary probability theory). And of course the proper way to test something complex is to look and see whether it forms better beliefs about not-yet-revealed parts of the world which are available for measurement.
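The selection point can be sketched numerically (all numbers here are invented): the same report, "10 survivors", is overwhelming evidence if exactly 10 patients were treated, and almost no evidence if the 10 were cherry-picked from a million trials.

```python
# Hypothetical model: a "safe" drug lets a patient survive with p = 0.99,
# a "deadly" one with p = 0.10; prior 50/50 between the two hypotheses.
def posterior_safe(lik_safe, lik_deadly, prior=0.5):
    return prior * lik_safe / (prior * lik_safe + (1 - prior) * lik_deadly)

# Case 1: exactly 10 patients were treated and all 10 survived.
p_treated = posterior_safe(0.99 ** 10, 0.10 ** 10)   # very close to 1

# Case 2: 10 survivors were picked out of a million trials.  "At least
# 10 survivors exist" is near-certain under BOTH hypotheses, so both
# likelihoods are ~1 and the report barely moves the prior at all.
p_cherry = posterior_safe(1.0, 1.0)                  # stays at 0.5
```

Without knowing how the trials were selected, the two cases are indistinguishable from the report alone, which is the commenter's point about 'updating' on incomplete information.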
Voting does have a Nash equilibrium, but it's mixed: some people vote and some people don't (perhaps people flip a coin to decide).
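A minimal sketch of such a mixed equilibrium, using a hypothetical two-supporter participation game (the payoffs are assumptions for illustration, not from the post): each supporter gains 1 if at least one of them votes, and voting costs c.

```python
# In the symmetric mixed equilibrium each supporter votes with probability
# p* = 1 - c: given that the other votes with p*, voting (a sure win, minus
# the cost) and abstaining (a win only if the other supporter votes) yield
# identical payoffs, so randomizing is a best response -- i.e. "flip a coin".
def payoff_vote(c):
    return 1.0 - c

def payoff_abstain(p_other):
    return p_other

c = 0.3
p_star = 1.0 - c
indifferent = abs(payoff_vote(c) - payoff_abstain(p_star)) < 1e-12
```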
The way people "coordinate around a putative moral reason" is by forming institutions. In the US, the examples are readily available within the party structures and within the various institutions associated with them.
Tim Tyler, a few comments down from here, explains the institutions responsible.
Funny example with priors: there is no god, and hearing voices means you are crazy with high probability. Talk to any atheist and ask them what they would do if a voice from the sky told them to believe in God. They will answer that they would think themselves insane and try to get it fixed. This means that with certain collections of priors, the answer of a human who is *trying* to be Bayesian often gets stuck, because the perceptual filtering mechanism filters data before any update happens.
In order for us to be Bayesian and not get stuck priors, we would have to have raw access to the data with which we update our priors. Unfortunately, for most important choices this is impossible, because our minds always filter and categorize the data we receive.
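The stuck-prior point can be put in numbers (all of them made up for illustration): once the voice is heard, it is near-certain under both the "god" and the "insanity" hypotheses, so only the priors matter, and the tiny prior on a god never recovers.

```python
# Invented priors: a god speaking is vastly less likely a priori than
# going insane.  Hearing the voice is ~certain under either hypothesis.
prior_god, prior_insane = 1e-9, 1e-3
lik_god, lik_insane = 1.0, 1.0

post_god = (prior_god * lik_god) / (
    prior_god * lik_god + prior_insane * lik_insane
)
# post_god is on the order of 1e-6: the dramatic experience barely moves
# belief in a god, because the insanity hypothesis absorbs the evidence.
```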
@Stephen R. Diamond
> But perhaps it isn't surprising that rationalists who are computer programmers by trade would think computer code is the fundamental language of the universe. (On occupation and ideology, see my The practical basis for mass ideologies: Construal-level theory of ideologies meets habit theory of morality – http://tinyurl.com/6uqusqc .)
I think you are being unfair to programmers. If you have ever worked on computational physics of any kind, you know just how extremely unnatural it is to have full rotational symmetry (let alone the phases of MWI!). The same goes for the Solomonoff induction and MWI misunderstanding.
Likewise for the poor understanding of the objective properties behind 'probability', which are very clear if you do any stochastic modelling on a computer, especially at an advanced enough level where convergence as 1/sqrt(n) is not good enough.
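The 1/sqrt(n) point, sketched: the standard error of a Monte Carlo estimate scales as sigma / sqrt(n), so halving the error costs four times the samples, and each extra decimal digit costs a factor of one hundred.

```python
import math

# Standard error of an n-sample Monte Carlo mean with per-sample
# standard deviation sigma.
def stderr(sigma, n):
    return sigma / math.sqrt(n)

err_100 = stderr(1.0, 100)       # 0.1
err_400 = stderr(1.0, 400)       # 0.05  -- 4x the samples, half the error
err_10k = stderr(1.0, 10_000)    # 0.01  -- 100x the samples, one more digit
```

This scaling is why anyone doing serious stochastic modelling is forced to confront what the 'probability' being estimated actually is.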
I would say that what you write applies more to computer enthusiasts.
That's the cargo cult mentality in a nutshell: focus on the superficial similarities between your runway and the real runway, along with motivated blindness about the differences. How do you think cargo cults work? They literally can't (or don't want to) understand the crucial difference between their imitation of a runway and the real thing. They say: okay, we agree that the colour of the runway is a little off, but we are working on it; all while being entirely oblivious to the whole enormous logistics of actually setting up a real airport.
The logic has been tested to work and found to be useful. It has been found that it doesn't arrive at contradictory conclusions in practice, despite much effort to find them.
None of that is true about the reasoning behind the 'estimates' of the risk from rogue AI. It is as distant from the real reasoning as cargo cult's runway is distant from a real airport.
The most obvious test, though, is this: the people behind this whole silly exercise claim superior rationality to most scientists; they claim to see things that scientists miss. The cargo cult's runway crew claims superior headphones. Well, let's listen to those headphones. Utter silence: it is just a wooden imitation. They can't sell those headphones to audiophiles. The headphones are for this pseudo-runway only.
Likewise, superior rationality (especially epistemic rationality) should result in testable hypotheses about the real world, which can be put to experimental tests. None of that comes out.
@Stephen R. Diamond
Ohh, that's my pet peeve with the advocate you are speaking of and the repeaters club. MWI is not even a valid code in Solomonoff induction. You only deal with codes whose outputs *begin* with the data (not merely contain it), and for a very simple reason. Seriously, how hard can it be to think of a code that counts from 1 to infinity? It's not even that they are promoting misconceptions; it's that those misconceptions are not even smart.
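The counting-program point can be sketched directly (an illustrative toy, not anyone's actual formalism): a trivial program that prints 1, 2, 3, ... eventually *contains* any finite digit string in its output, so if containment counted, it would trivially "explain" all data.

```python
# A program enumerating all integers will contain any finite digit string
# somewhere in its output -- which is exactly why Solomonoff induction only
# credits programs whose output *begins* with the observed data.
def counting_output(limit):
    return "".join(str(i) for i in range(1, limit + 1))

data = "271828"                    # some arbitrary "observed" digit string
out = counting_output(300_000)

contains = data in out             # True: the counter "predicts" everything
valid_code = out.startswith(data)  # False: not a valid code for this data
```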
The bottom line is, the proper way of doing induction is why physics is not entirely on board with MWI.
The issue primarily arises from some sort of misunderstanding of the theory. Some rationalists for some reason expect that if they maximize products involving some number they made up (after calling this number a probability), they'll get something good in return.
I think the misunderstanding is driven primarily by a faith in foundationalist epistemology, of the kind Richard Rorty rubbished in "Philosophy and the Mirror of Nature."
But perhaps it isn't surprising that rationalists who are computer programmers by trade would think computer code is the fundamental language of the universe. (On occupation and ideology, see my The practical basis for mass ideologies: Construal-level theory of ideologies meets habit theory of morality – http://tinyurl.com/6uqusqc .)