52 Comments

Probably late to the party, but doesn't it (the anthropic doomsday argument) then reduce to the trivial observation that the further along a finite process you are, the closer you are to its end? I mean, you *have* to be closer to the final generation of humans than your ancestors were, simply because *you* exist, so their generation could not have been the final one.


Shulman and Bostrom consider and largely dismiss this conclusion because, taken all the way, the SIA argument implies we live in a simulation. Or something. Does anyone know why our being in a simulation would mean AI should be doable here?

http://www.nickbostrom.com/...

(page 11, footnote)


Re: "After at first thinking you are in a random box"

Why on earth would you think you are in a "random" box?!?

If I rearranged the diagram so that the three categories now corresponded to totally different time periods, would you *still* think that you are in a random box?

Drawing arbitrarily-labelled boxes and assigning them equal probability does not seem to be a sensible way to generate priors on this topic.


My diagnosis of the situation is that these puzzles all arise from a fundamental mistake: treating probability as confidence in something's truth.

I mean, the problem is that there is no random process that spits you out into society with uniform probability of being any baby (or wakes you up with uniform probability on any given day of the sleeping beauty paradox). These paradoxes are seductive because *we assume that, since we can't say anything in favor of us being born now rather than then, we can assign the possibilities equal probability.* Really, though, the notion of probability doesn't even make sense here.

To take apart this paradox, let me first show why we can't have shown that the probability of being near the end of civilization is large. Suppose I'm god and I can just magic up these worlds, and 999,999 times out of a million I set up the world to continue indefinitely with an ever-expanding population. If I magic up a whole bunch of these worlds and assign souls randomly to them, then it turns out that the probability of being in the last generation is very, very low.

Thus, since this situation is wholly consistent with what we've observed so far, we can't actually have the information necessary to conclude that the probability of the end of days coming soon is high. The error was taking our lack of any reason to distinguish being born as any given baby in our universe as justification for supposing that the world is fixed but who you are in it is chosen via a uniform distribution over all the people.

Probability isn't magic and it shouldn't be used interchangeably with confidence. It's just a fancy way of counting things.
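To make the world-counting arithmetic above concrete, here is a toy, finite version of the thought experiment (a rough sketch only; the world counts, generation lengths, and doubling rate are all made-up assumptions, since a genuinely infinite population can't be sampled uniformly, which is really the point):

```python
# Toy, finite stand-in for the "999,999 out of a million worlds go on
# indefinitely" scenario. All numbers are illustrative assumptions:
# doomed worlds end after DOOMED_GENS generations, the "indefinite" worlds
# are truncated at LONG_GENS generations, and population doubles each generation.

DOOMED_WORLDS = 1          # worlds that end early
LONG_WORLDS = 999_999      # worlds that keep going (truncated for this toy)
DOOMED_GENS = 10
LONG_GENS = 100            # crude stand-in for "indefinitely"

def people_in_world(generations):
    """Total people ever born in a world whose population doubles each generation."""
    return sum(2 ** g for g in range(generations))

doomed_total = DOOMED_WORLDS * people_in_world(DOOMED_GENS)
long_total = LONG_WORLDS * people_in_world(LONG_GENS)

# People standing in the final generation of a doomed world.
last_gen_doomed = DOOMED_WORLDS * 2 ** (DOOMED_GENS - 1)

# If a "soul" is assigned uniformly over every person in every world,
# landing in a doomed world's last generation is astronomically unlikely.
p_last_gen = last_gen_doomed / (doomed_total + long_total)
print(f"P(last generation of a doomed world) ~ {p_last_gen:.3e}")
```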


Actually, a version of this argument can be turned around to show that if your prior probability of living in a universe that repeats ad infinitum (e.g., a universe stuck in a big infinite loop) is non-zero, then you should conclude that you do in fact live in such a universe with probability 1. Indeed, I think this can be seen as a fairly serious problem for some kinds of Bayesian notions of science to handle.

First, just consider the sleeping beauty problem. You enroll in some weird experiment and are told that you will be put to sleep and then a coin will be flipped. If the coin is heads, you are woken up the next day, informed that the experiment isn't over yet, and then put back to sleep, after which your memory of waking up is wiped. If tails, you aren't woken up tomorrow at all. In either case you are woken up the day after tomorrow and put back to sleep until being released three days from now. Now, when you are woken up during the experiment, should you assign probability 1/2 or probability 2/3 to the coin having come up heads? By a similar argument to the one above, there are strong reasons to say it is 2/3 (certainly you should act as if it is 2/3 and not make even-odds bets on the coin being tails, since you could lose 2 dollars on a heads and win only 1 on a tails).
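As a quick sanity check of that betting claim, here is a minimal sketch (assuming a fair coin and a $1 even-odds bet on tails placed at each awakening, with the awakening schedule described above):

```python
# Expected-value check: bet $1 on tails at even odds at every awakening.
# Heads -> two awakenings during the experiment, tails -> one awakening.
import random

def one_run():
    heads = random.random() < 0.5
    awakenings = 2 if heads else 1
    # Each awakening: win $1 if the coin was tails, lose $1 if it was heads.
    return awakenings * (1 if not heads else -1)

trials = 100_000
avg = sum(one_run() for _ in range(trials)) / trials
print(f"Average winnings per run betting tails at even odds: ${avg:+.2f}")
# Comes out around -$0.50 per run: even odds on tails is a losing bet,
# which is the sense in which you should act as if P(heads) = 2/3.
```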

Now apply the same argument to the scenario I gave above. Do the math and you find that, merely given the fact that you currently exist, you should boost the probability that there are infinitely many copies of yourself (in time/space/whatever) to 1.


While obviously not wanting to fail my "super clicker" grade, I would place the chance of one principle solving all those issues at close to zero.


Deploying the now famous Geddes transhuman intuition, I'm still confident the SIA principle does in fact cancel the doomsday argument. Also, I think your argument for time asymmetry was vaguely along the right lines. I believe the SIA principle can be generalized somehow, and that generalization is what kills all four of the following birds with one stone:

(a) Anthropic puzzles (e.g. the doomsday argument)
(b) Born probabilities
(c) Puzzles of weighting subjective experiences
(d) Time asymmetry

(a), (b), (c) and (d) are all related, and I'm sure the key is some big generalization of SIA which has so far been missed. I leave pesky near-mode details to you folks.

(Hint: Categorization, analogical inference, and reference classes are the key, not Bayes, and similarity is the important measure, not probability. It is simply obvious that categorization is the key to solving these puzzles and that it is more powerful than induction. Any LW/OB participant who has failed to spot these blinking obvious points is not a 'super clicker', I'm afraid.)


Re: "you have no way to assign any probabilities of anything relevant to the inquiry"

Probabilities are measures of uncertainty; people would be well advised to attach them to all their beliefs.


None of those seem very likely.


Seriously, how can you attempt to conclude anything from something you expect but do not observe, without knowing whether the something is actually there (despite your failure to observe it)? It's like someone in the 13th century basing a theory on the absence of a new world (or on the earth not being a sphere, or on its being the center of the universe), or someone in the 20th century basing a theory on the absence of planets outside this solar system, etc.

Put another way, as others have noted, you have no way to assign probabilities to anything relevant to the inquiry. Aren't the odds best that we are in a simulation? How do you figure the odds we're not bacteria in a petri dish wondering why the universe seems to be finite, lacks other bacteria, etc.? How do you figure the odds it's not god hiding things from us? How do you figure the odds that all advanced civilizations move almost immediately to communicating via gravity waves or somesuch that we can't detect? How do you figure the odds we're not in a zoo, or quarantined?


Frankly Robin, I'm disappointed.

(1) Maybe nearly all advanced civilizations don't care to colonize their galaxies -- maybe they spend all their time in pleasant simulated worlds (think holodeck, or the matrix).

(2) Maybe we're in such a backwater of the galaxy that the galactic empire doesn't care to come here.

(3) Maybe we're in a "protected ecological reserve" of the galactic empire.


That they are not exploiting the energy gradients that fuel our civilization is pretty powerful evidence that they are not here, IMO. To find viable scenarios, one has to hypothesize that they are actively hiding. There are some possible scenarios there - but they don't look terribly probable.


You're randomizing across time instead of across birth order. The anthropic doomsday argument works by birth order, and if you're truly picking people at random by birth order, from the first Homo sapiens sapiens 'til now, they're pretty likely to have been born reasonably recently.
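Here is a toy illustration of that point (a sketch under the deliberately crude assumption that the number of births doubles every period; real population history is messier, so the exact fraction is only illustrative):

```python
# With exponentially growing births, a uniformly random birth rank
# mostly lands near the end of the sequence so far.

PERIODS = 40                                # made-up number of doubling periods
births = [2 ** t for t in range(PERIODS)]   # births in each period
total = sum(births)

last_5 = sum(births[-5:])
print(f"Fraction of all births in the last 5 of {PERIODS} periods: {last_5 / total:.3f}")
# With doubling, the last 5 periods hold about 97% of everyone ever born,
# so a random birth rank is very likely to be "recent" in this toy model.
```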


My post countering this argument can be found at:

http://lesswrong.com/lw/1zj...


No, Robin, you are right. I thought about the argument a bit more on my way home and realized I was mistaken.


This doesn't seem right to me, at least in the simple versions I've thought of. Can you make your argument via box diagrams like Katja did?
