## Your existence is informative

Warning: this post is technical. Suppose you know that there are a certain number of planets, N. You are unsure about the truth of a statement Q. If Q is true, you put a high probability on life forming on any given arbitrary planet. If Q is false, you put a low probability on this. You have a prior probability for Q. So far you have not taken into account your observation that the planet you are on has life. How do you update on this evidence, to get a posterior probability for Q? Since your model just has a number of planets in it, with none labeled as ‘this planet’, you can’t update directly on ‘there is life on this planet’, by excluding worlds where ‘this planet’ doesn’t have life. And you can’t necessarily treat ‘this’ as an arbitrary planet, since you wouldn’t have seen it if it didn’t have life.
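The setup can be sketched numerically. All values below are illustrative assumptions, not from the post; the point is that the two candidate updates the post warns about give different answers.

```python
# Toy version of the setup; every number here is an illustrative assumption.
N = 10            # number of planets
p_Q = 0.5         # prior probability of Q
p_life_Q = 0.9    # P(life on an arbitrary planet | Q)
p_life_nQ = 0.1   # P(life on an arbitrary planet | not Q)

# Candidate update 1: treat "this planet" as an arbitrary, pre-named planet.
post_named = (p_Q * p_life_Q) / (p_Q * p_life_Q + (1 - p_Q) * p_life_nQ)

# Candidate update 2: condition only on "at least one planet has life".
ex_Q = 1 - (1 - p_life_Q) ** N
ex_nQ = 1 - (1 - p_life_nQ) ** N
post_exists = (p_Q * ex_Q) / (p_Q * ex_Q + (1 - p_Q) * ex_nQ)

print(post_named)   # 0.9
print(post_exists)  # about 0.606
```

Which of these (if either) is the right way to use the observation is exactly what the thread below argues about.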

Are we talking about the "this universe" vs. "some universe" objection to anthropic fine tuning? For example http://arxiv.org/ftp/arxiv/...

I've always found the "this universe" objection highly unconvincing, because it appears no different from the clearly mistaken "this planet" objection (the "this planet" objection says that an ensemble of planets cannot explain why *this* planet is life-bearing). For example: http://www.apologeticsinthe...

Thanks Silent Cal, that actually changed my mind (I can be taught!). I now think that the confusion revolves around the fact that Q (which I had been thinking of as an assertion about a universe) is, given Katja's problem statement, really an assertion about a universe-as-experienced-by-an-observer.

So when computing P(Q) via Bayes' formula we need to compute ratios of various kinds of experienced-universes, not ratios of various kinds of universes. In other words, when counting universes we must weight them based on their respective number of observers.

If we assume that the number of intelligent observers in a universe is proportional to the number of planets w/ life, conditioning on "this planet has life" (or "planet 3 has life", or, what I believe to be equivalent, "a randomly-chosen planet has life") effectively weights each universe in just this way.

One can dream up variants of Katja's scenario for which conditioning on "there exists a planet with life" (which effectively weights each universe equally independent of its number of observers) is the right approach, but given Katja's scenario, together w/ the above assumption, I agree it makes sense to condition on a given planet having life.

The stipulation you make in the beginning that "half the worlds are X worlds and half are Y worlds" is what drives this result. I think the artificiality of this assumption is what causes the result to feel wrong at first.

If there are 11 total possible planets that you could be on with equal probability, and 10 of them happen to be in world X, it's pretty intuitive to think you're in world X with probability 10/11 (the 0.90909... result you got above).
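The 10/11 figure is just observer counting. A minimal check, assuming (as above) ten equally likely life-bearing planets in world X and one in world Y:

```python
# 11 equally likely planets you could be on: 10 in an X world, 1 in a Y world.
planets = ["X"] * 10 + ["Y"]
p_X = planets.count("X") / len(planets)
print(p_X)  # 0.9090909090909091
```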

Of course, in real life the distribution of possible worlds and their percentages of life-containing planets is likely much more complicated and currently unknown (unknowable?). I think this is what is driving your visceral sense of incongruity.

I changed the post a little to make some bits clearer. Thanks.

Yes, you have priors for everything.

What's wrong with defining entities across possible worlds? I agree that it's bogus to think there is a correct answer about it, but as far as I know there are few constraints on defining stuff. Part of my point is that this is all irrelevant to what evidence you receive when you observe your own existence.

The argument would go through regardless of your choice of priors, so why estimate priors as part of it?

It is quite similar to SIA vs. SSA, except that SSA only favors treating the first information you have about your existence as non-evidence: if you learn later that you are human, for instance, SSA usually recommends updating on that in the same way SIA does. Treating 'I observe X about me' as 'X is true of someone' means that you never update except when there is a possible world in which nobody has X. You can get the same result by using SSA with the narrowest possible reference class, narrowing the reference class every time you get new information.

I think I am with you, but in a different language. If your multiverse plays the role of a sample space in the Kolmogorov axioms, then E is a well-defined subset of it. That's good. But F = "I am on planet 3" is not, because the definition of the sample space does not involve "me".

Now we must expand the sample space beyond the initial multiverse. Some ways of expanding it correspond then to SSA and others to SIA (thank you for the link BTW, it will take me a while to absorb it all.)

Well, "This planet has life" carries information such as that a planet in a zone with liquid water has life, which is the information we use when looking for other planets that may have life. "Some planet has life", by contrast, would seem to strip that information, unless you fold everything you know about 'this' into 'some'. I have a feeling that the argument is entirely about semantics; the 'some' may effectively mean 'this'.

V V: This sounds like a big epistemological problem. Cool.

Does it ameliorate it at all if you can use your own existence as evidence? Granted, you then know that the universe has definitely produced a nonzero amount of life, but if you're trying to learn the probability of Q (as defined in the OP) and you have no sensible way of getting a prior, it seems like knowing that you exist wouldn't help.

I'm not sure I fully understand your objection, but couldn't you use the universal prior?

Short answer: no.

Long answer: the "universal prior" is not so much universal after all:

1) It embeds implicit assumptions about what is physically computable in the universe.

2) If you exist inside the universe, then you can't use the universal prior, because it is uncomputable within the very universe it is supposed to model.

3) It is defined in terms of strings. Mapping strings to universes, assuming that is even possible, requires an encoding, which adds a layer of arbitrariness.

4) It is defined only up to the choice of a universal Turing machine. While asymptotic equivalence theorems do hold, the choice of the specific Turing machine may matter for finite strings.

See my comment above (the one that begins "I have an example that I think might clarify things"). Q can be translated into the terms of that comment as "I am in an X world."

I tentatively believe

P(Q|there is life on this planet) = P(Q|there exists a planet with life)

because I cannot see how "there is life on this planet" ("this planet" denoting the planet I, a life form, am on) sheds any more light on the likelihood of Q being true than "there exists a planet with life" does. That said, I can't prove it, so I don't think I fully grok the situation yet.

It would help greatly if someone who DISbelieves the above equation would assume specific values for all relevant quantities (Q's a priori probability, P(life on arb planet | Q), P(life on arb planet |~Q), N, etc), and then explicitly calculate numerical values for the LHS and the RHS of the above equation.
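In the spirit of that request, here is one such explicit calculation with made-up values, using an SIA-style reading of "this planet" (each hypothesis weighted by its expected number of inhabited planets). On that reading the two sides come out unequal; a different reading of "this" could restore the equality, which is the crux of the thread.

```python
# Made-up values for the quantities listed in the comment above.
N = 10
p_Q = 0.5
p_life_Q, p_life_nQ = 0.9, 0.1

# RHS: P(Q | there exists a planet with life).
ex_Q = 1 - (1 - p_life_Q) ** N
ex_nQ = 1 - (1 - p_life_nQ) ** N
rhs = p_Q * ex_Q / (p_Q * ex_Q + (1 - p_Q) * ex_nQ)

# LHS: P(Q | there is life on this planet), reading "I" as a random observer
# across hypotheses, so each hypothesis is weighted by its expected number
# of inhabited planets, N * P(life | hypothesis); the factor N cancels.
w_Q = p_Q * p_life_Q
w_nQ = (1 - p_Q) * p_life_nQ
lhs = w_Q / (w_Q + w_nQ)

print(lhs, rhs)  # 0.9 vs roughly 0.606: not equal on this reading
```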

Good point.

Let's make E be "Planet 3 has life" and F be "I am on planet 3". We observe both E and F. P(X | E) and P(Y | E) are uncontroversially as I calculated. The question, I guess, is how to handle F. If P(F) is taken to be 0.1, then updating on both E and F will be the same as updating just on F (as it clearly should be), and bring you back to P(X) = 0.5, P(Y) = 0.5, essentially because P(F | E, Y) is 1 and P(F | E, X) is 0.1. To say P(F) = 0.1, though, you need to identify "I" with "a randomly chosen observer from the multiverse" or "a randomly chosen observer from a randomly chosen world" or some such. But note that if we choose "a randomly chosen observer from the multiverse", then the relevant P(X) is the prior probability that such an observer is in an X world, which is 0.90909..., so the calculation ends up with the same result. If we choose "a randomly chosen observer from a randomly chosen world", though, we get P(X | F) = 0.5. This is precisely the difference between SIA and SSA:

http://meteuphoric.wordpres...
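The branches of that calculation can be checked directly. This is a sketch of the ten-planet scenario as I read it (an assumption: X worlds have life on all ten planets, Y worlds only on planet 3):

```python
p_X = p_Y = 0.5            # prior over world types
pE_X, pE_Y = 1.0, 0.1      # P(planet 3 has life | world type)

# Updating on E alone.
pX_E = p_X * pE_X / (p_X * pE_X + p_Y * pE_Y)
print(pX_E)  # 0.9090... (10/11)

# SSA reading of F = "I am on planet 3": a random observer in a randomly
# chosen world. P(F | E, X) = 0.1 (one of ten observers); P(F | E, Y) = 1
# (the only observer).
pF_EX, pF_EY = 0.1, 1.0
nX = p_X * pE_X * pF_EX
nY = p_Y * pE_Y * pF_EY
pX_ssa = nX / (nX + nY)
print(pX_ssa)  # 0.5, back to the prior

# SIA reading: a random observer from the whole multiverse, so the prior on
# being in an X world is already observer-weighted (10 observers vs 1).
qX, qY = 10 / 11, 1 / 11
mX = qX * pE_X * pF_EX
mY = qY * pE_Y * pF_EY
pX_sia = mX / (mX + mY)
print(pX_sia)  # 0.9090..., the same 10/11 again
```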

I wonder if there's a good deep reason why inferring "This particular planet has life" gives the SIA answer, while inferring "some planet has life" gives the SSA answer.

You have taken E to mean "planet 3 has life", and get P(E) = 0.55. Opposing philosophers would say that E means "I am on planet 3", in which case P(E) = 0.1 and the update gives you no information.

I am not saying that one philosophy or the other is right, only that the specification you gave doesn't lay the groundwork for doing Bayesian statistics. That's interesting in itself, in my response to Katja I talked vaguely about the need for "priors", but you just gave seemingly complete priors and it still doesn't satisfy me.

I think there is a fundamental metaphysical choice here, and I don't know which way to choose. (P.S. what are SSA and SIA?)

As Robin mentioned in his post where he said he was writing a book, he now has co-bloggers. This post was written by one of his co-bloggers, Katja Grace.

Are you objecting to the fact that I've made a specification of the possible worlds, or to the particular set of worlds I've specified?

It sounds to me like your objection is the former. The thing is, to get any answer at all, you have to have a prior, which is a probability distribution over possible worlds. Now, I spoke in terms of proportions of a set of worlds that all exist instead of in terms of probabilities, but the results will be the same. I find it helps my intuition to imagine that all of the worlds exist, but the conclusions are the same if you change it to "We know that the world has to have ten planets; there is a 50% prior probability that it is an X world, and a 50% prior probability that it is a Y world" and so on.
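The claimed equivalence of the two framings can be checked by simulation. A sketch using the ten-planet example (an assumption: P(planet 3 has life) is 1 in an X world and 0.1 in a Y world); the Monte Carlo side samples one world from a half-X, half-Y ensemble and conditions on planet 3 having life:

```python
import random

random.seed(0)
pE_X, pE_Y = 1.0, 0.1   # P(planet 3 has life | world type)

# Probabilistic framing: one world, 50/50 prior, exact Bayes.
exact = 0.5 * pE_X / (0.5 * pE_X + 0.5 * pE_Y)

# Ensemble framing: draw a world uniformly from a half-X, half-Y ensemble,
# then keep only the draws where planet 3 has life.
hits = hits_X = 0
for _ in range(200_000):
    world = random.choice("XY")
    if random.random() < (pE_X if world == "X" else pE_Y):
        hits += 1
        hits_X += (world == "X")
mc = hits_X / hits

print(exact)         # 0.9090... (10/11)
print(round(mc, 3))  # close to 0.909
```

The two numbers agree up to sampling noise, which is the sense in which the proportions-of-existing-worlds picture and the prior-probability picture are interchangeable here.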

If you're objecting to my particular specification, it clearly was cooked up to get the result I wanted, but it does demonstrate that observing that one's planet has life can make a difference for at least some priors. Moreover, my intuition says that this would have an effect for most real cosmological priors. I don't have a formal argument for this now, but at the very least, this would mean that the relevant debate should be over what cosmological priors are like.

Orphan:

If you just mean that seeing 9 is meaningless because it's already in the problem definition that a 9 is currently displayed, then you still have to update on the fact that a 9 is currently displayed. Any textbook Bayesian reasoning problem has all of the evidence contained in the problem definition, but you still have to update on it.

The other thing I think you might mean is that we don't even have a prior about who pushed the button how many times, i.e. someone who lived forever and really liked a number is no less likely than any other possibility. This seems odd to me; I'm pretty sure a Bayesian always has a prior, even if it's not a very good one. Also, if this is what you meant, the same point stands if the current number is an observation instead of a part of the problem definition.

V V: I'm not sure I fully understand your objection, but couldn't you use the universal prior? http://wiki.lesswrong.com/w...

Such an a priori estimate is clearly not a practical thing to do, but the fact that it's possible shows how updating on your own existence makes sense.
