Tag Archives: Probability

Your existence is informative

Warning: this post is technical.

Suppose you know that there are a certain number of planets, N. You are unsure about the truth of a statement Q. If Q is true, you put a high probability on life forming on any given planet. If Q is false, you put a low probability on this. You have a prior probability for Q. So far you have not taken into account your observation that the planet you are on has life. How do you update on this evidence, to get a posterior probability for Q? Since your model just has a number of planets in it, with none labeled as ‘this planet’, you can’t update directly on ‘there is life on this planet’ by excluding worlds where ‘this planet’ doesn’t have life. And you can’t necessarily treat ‘this’ as an arbitrary planet, since you wouldn’t have seen it if it didn’t have life.

I have an ongoing disagreement with an associate who suggests that you should take ‘this planet has life’ into account by conditioning on ‘there exists a planet with life’. That is,

P(Q|there is life on this planet) = P(Q|there exists a planet with life).

Here I shall explain my disagreement.

Nick Bostrom argues persuasively that much science would be impossible if we treated ‘I observe X’ as ‘someone observes X’. This is basically because in a big world of scientists making measurements, at some point somebody will make almost every possible mistaken measurement. So if all you know, when you measure the temperature of a solution to be 15 degrees, is that you are not in a world where nobody ever measures its temperature to be 15 degrees, this doesn’t tell you much about the temperature.

You can add other apparently irrelevant observations you make at the same time – e.g. that the table is blue chipboard – in order to make your total set of observations less likely to arise even once in a given world (in the limit, this is the suggestion of FNC, full non-indexical conditioning). However, it seems implausible that the inferences you draw from a measurement should differ depending on whether you also take in a detailed but irrelevant picture at the same time, or only limited sensory input. And the same problem re-emerges if the universe is supposed to be larger. Given that the universe is thought to be very, very large, this is a problem. Not to mention, it seems implausible that the size of the universe should greatly affect probabilistic judgements made about entities which are close to independent of most of the universe.
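To see how the force of ‘someone measured 15 degrees’ drains away as the world grows, here is a toy sketch in Python. All the numbers – the 0.99 chance of a correct reading, the 0.001 chance of a mistaken ‘15 degrees’ reading, the scientist counts – are my own illustrative assumptions, not anything from Bostrom.

```python
# Toy model: each of n scientists measures the solution's temperature.
# If the true temperature is 15 degrees, a given measurement reads "15"
# with probability 0.99; if it is not, a mistaken "15" reading happens
# with probability 0.001. (Both numbers are made up for illustration.)

def p_someone_reads_15(n, p_single):
    """Probability that at least one of n independent measurements reads 15."""
    return 1 - (1 - p_single) ** n

for n in [1, 10, 1_000, 1_000_000]:
    p_if_15 = p_someone_reads_15(n, 0.99)      # true temperature is 15
    p_if_not = p_someone_reads_15(n, 0.001)    # true temperature is something else
    # Likelihood ratio carried by the evidence "someone, somewhere, read 15":
    print(f"n={n:>9,}  likelihood ratio = {p_if_15 / p_if_not:.2f}")
```

The ratio is about 990 for a lone scientist but falls toward 1 as n grows, so in a big enough world ‘someone observes X’ is nearly worthless as evidence, even though your own reading is not.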

So I think Bostrom’s case is good. However I’m not completely comfortable arguing from the acceptability of something that we do (science) back to the truth of the principles that justify it. So I’d like to make another case against taking ‘this planet has life’ as equivalent evidence to ‘there exists a planet with life’.

Evidence is what excludes possibilities. Seeing the sun shining is evidence against rain, because it excludes the possible worlds where the sky is grey, which include most of those where it is raining. Seeing a picture of the sun shining is not much evidence against rain, because the worlds it excludes – those where you don’t see such a picture – are about as likely to be rainy as the worlds that remain.

Receiving the evidence ‘there exists a planet with life’ means excluding all worlds where all planets are lifeless, and not excluding any other worlds. At first glance, this must be different from ‘this planet has life’. Take any possible world where some other planet has life, and this planet has no life. ‘There exists a planet with life’ doesn’t exclude that world, while ‘this planet has life’ does. Therefore they are different evidence.
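To make this concrete, here is a small Python sketch that enumerates the possible worlds of a two-planet toy model and conditions each way, treating ‘this planet’ as a fixed, labeled planet. The prior of 0.5 on Q and the life probabilities of 0.9 and 0.1 are my own illustrative choices, not anything from the setup above.

```python
from itertools import product

P_Q = 0.5                            # prior probability of Q (illustrative)
P_LIFE = {True: 0.9, False: 0.1}     # P(life on a given planet | Q) and | not-Q
N = 2                                # number of planets

def worlds():
    """Yield (q, life, probability) for every possible world."""
    for q in (True, False):
        for life in product((True, False), repeat=N):
            p = P_Q if q else 1 - P_Q
            for alive in life:
                p *= P_LIFE[q] if alive else 1 - P_LIFE[q]
            yield q, life, p

def posterior(evidence):
    """P(Q | evidence), where evidence is a predicate on (q, life)."""
    num = den = 0.0
    for q, life, p in worlds():
        if evidence(q, life):
            den += p
            num += p if q else 0.0
    return num / den

# 'There exists a planet with life' excludes only the all-lifeless worlds:
print(posterior(lambda q, life: any(life)))    # ~0.84
# 'This (labeled) planet has life' also excludes worlds where only the
# other planet has life:
print(posterior(lambda q, life: life[0]))      # 0.9
```

The two posteriors differ, which is just the point above that the two pieces of evidence exclude different sets of worlds.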

At this point however, note that the planets in the model have no distinguishing characteristics. How do we even decide which planet is ‘this planet’ in another possible world? There needs to be some kind of mapping between planets in each world, saying which planet in world A corresponds to which planet in world B, etc. As far as I can tell, any mapping will do, as long as a given planet in one possible world maps to at most one planet in another possible world. This mapping is basically a definition choice.

So suppose we use a mapping where in every possible world where at least one planet has life, ‘this planet’ corresponds to one of the planets that has life. See the below image.

[Image: Which planet is which?]

Squares are possible worlds, each with two planets. Pink planets have life, blue do not. Define ‘this planet’ as the circled one in each case. Under this mapping, learning that there is life on this planet is equivalent to learning that there is life on some planet.

Now learning that there exists a planet with life is the same as learning that this planet has life. Both exclude the far right-hand possible world, and neither excludes any other world. What’s more, since we can change the probability distribution we end up with just by redefining which planets are ‘the same planet’ across worlds, indexical evidence such as ‘this planet has life’ must be horseshit.

Actually the last paragraph was false. If in every possible world which contains life you pick one of the planets with life to be ‘this planet’, you can no longer know whether you are on ‘this planet’. From your observations alone, you could be on the other planet, the one that is not circled in each of the above worlds, which only has life when both planets do. Whichever planet you are on, you know that there exists a planet with life. But because there is some probability of your being on the planet which only rarely has life, you have more information than that. Redefining which planet was which didn’t change that.
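Here is a quick Monte Carlo sketch of this point, under my own simplifying assumption that, before observing anything, you are equally likely to be on either of the two planets (same illustrative prior and life probabilities as in the earlier sketch).

```python
import random

P_Q = 0.5
P_LIFE = {True: 0.9, False: 0.1}
rng = random.Random(0)

samples = []
for _ in range(200_000):
    q = rng.random() < P_Q
    life = [rng.random() < P_LIFE[q] for _ in range(2)]
    you = rng.randrange(2)   # assumption: equally likely to be on either planet
    samples.append((q, life, you))

# Condition on 'there exists a planet with life':
exists = [q for q, life, you in samples if any(life)]
# Condition on 'the planet you are on has life' (no cross-world mapping needed):
mine = [q for q, life, you in samples if life[you]]

print(sum(exists) / len(exists))   # ~0.84
print(sum(mine) / len(mine))       # ~0.90
```

However ‘this planet’ is matched up across worlds, conditioning on the fact about the planet you actually find yourself on gives a stronger update than conditioning on mere existence.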

Perhaps a different definition of ‘this planet’ would get what my associate wants? The problem with the last one was that it no longer necessarily included the planet we are on. So suppose we define ‘this planet’ to be the one you are on, plus a life-containing planet in each of the other possible worlds that contain at least one life-containing planet. A strange, half-indexical definition, but why not? One thing remains to be specified: which planet is ‘this’ planet in the worlds where you don’t exist? Let’s say it is chosen randomly.

Now is learning that ‘this planet’ has life any different from learning that some planet has life? Yes. There are again cases where some planet has life, but not the one you are on. This is because the definition only picks out planets with life across the other possible worlds, not this one. In this world, ‘this planet’ refers to the one you are on, and if you don’t exist, that planet may not have life, even if other planets do. So again, ‘this planet has life’ gives more information than ‘there exists a planet with life’.

You either have to accept that someone else might exist when you do not, or you have to define ‘yourself’ as something that always exists, in which case you no longer know whether you are ‘yourself’. Either way, changing definitions doesn’t change the evidence. Observing that you are alive tells you more than learning that ‘someone is alive’.


Is Confidence Social?

Consider some uses of the word “confident”:

Tom is confident the bus will arrive soon.

This is often interpreted as Tom assigning a high probability to the bus arriving soon. But then what about:

The CDC is confident this disease poses only a moderate risk.

Is there a high probability that moderate risk is the correct risk assessment? But what can it mean for an estimate to be “correct”? Is this about the robustness of the estimate to analysis variations? Now consider:

Sam took me into his confidence.

Perhaps this means Sam assigned a high probability that I would not betray him. But then what about:

Bill’s manner is more confident these days.

Perhaps this means Bill assigns a high probability to his having a high ability.  But this last usage seems to me better interpreted as Bill acting higher status, and expecting his bid for higher status to be accepted by others. Bill does not expect to be challenged in this bid, and beaten down.

If you think about it, this status move interpretation can also make sense of all the other uses above. Sam taking me into his confidence might mean that Sam didn’t expect me to use his trust to reduce his status. And the CDC might expect that its risk estimation could not be successfully challenged by other parties, perhaps in part because this estimate was robust to analysis variations. Similarly, Tom might expect that his status won’t be reduced by the bus failing to show up as he predicted.

Yes, sometimes confidence can be in part about assigning a high probability, or about the robustness of an analysis. But more fundamentally, confidence may be about status moves. It is just that in some circumstances we make status bids by asserting that some event has a high probability, or that variations of an analysis tend to lead to similar results.

If you ever offer advice to someone who asks how confident you are in that advice, try to remember that this may at root not be a question about probabilities. It may instead be a question about what can happen socially if your advisee follows your advice. How easily might others challenge that advice, perhaps then lowering your advisee’s status? To figure that out, you may need to look beyond probabilities and analysis robustness, and consider who might want to challenge this advice, what might make them want to launch such a challenge, and what resources they might bring to such a fight.


Sleeping Beauty’s Assistant

The Sleeping Beauty problem:

Sleeping Beauty goes into an isolated room on Sunday and falls asleep. Monday she awakes, and then sleeps again Monday night. A fair coin is tossed, and if it comes up heads then Monday night Beauty is drugged so that she doesn’t wake again until Wednesday. If the coin comes up tails, then Monday night she is drugged so that she forgets everything that happened Monday – she wakes Tuesday and then sleeps again Tuesday night. When Beauty awakes in the room, she only knows it is either heads and Monday, tails and Monday, or tails and Tuesday. Heads and Tuesday is excluded by assumption. The key question: what probability should Beauty assign to heads when she awakes?

The literature is split: most answer 1/3, but some answer 1/2 (and a few give other answers). Here is an interesting variation:

Imagine Sleeping Beauty has a (perhaps computer-based) assistant. Like Beauty, the assistant’s memory of Monday is erased Monday night, but unlike Beauty, she is not kept asleep on Tuesday, even if the coin comes up heads. So whenever Beauty is awake her assistant is also awake, and has exactly the same information about the coin as does Beauty. But the assistant might also wake up to see Beauty asleep, in which case the assistant can conclude that it is definitely heads and Tuesday. The key question: should Beauty’s beliefs differ from her assistant’s?

Since the assistant knows that she might awake to see Beauty asleep, and conclude heads for sure, the fact that the assistant does not see this clearly gives her info. This info should shift her beliefs away from heads, with the assistant’s new belief in heads being less than half. (If she initially assigned an equal chance to waking Monday versus Tuesday, her new belief in heads is one third.) And since when Beauty awakes she seems to have exactly the same info as her assistant, Beauty should also believe less than half.
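Here is a small simulation sketch of the assistant’s update, treating each iteration as one of her awakenings. The equal chance of it being Monday versus Tuesday is the assumption stated above; the fair coin comes from the problem setup.

```python
import random

rng = random.Random(0)
heads_when_beauty_awake = []

for _ in range(200_000):
    heads = rng.random() < 0.5               # fair coin
    day = rng.choice(["Monday", "Tuesday"])  # assistant's equal-chance assumption
    beauty_awake = (day == "Monday") or (not heads)  # Beauty sleeps Tuesday iff heads
    if beauty_awake:
        heads_when_beauty_awake.append(heads)

# The assistant's belief in heads, given she wakes and sees Beauty awake:
print(sum(heads_when_beauty_awake) / len(heads_when_beauty_awake))   # ~1/3
```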

I can’t be bothered to carefully read the many papers on the Sleeping Beauty problem to see just how original this variation is. Katja tells me it is a variation on an argument of hers, and I believe her. But I’m struck by a similarity to my argument for common priors based on the imagined beliefs of a “pre-agent” who existed before you, uncertain about your future prior:

Each agent is asked to consider the information situation of a “pre-agent” who is not sure which agents will get which priors. Each agent can have a different pre-agent, but each agent’s prior should be consistent with his pre-agent’s “pre-prior,” in the sense that the prior equals the pre-prior conditional on the key piece of information that distinguishes them: which agents actually get which priors. The main result is that an agent can only have a different prior if his pre-agent believed the process that produced his prior was special. (more)

I suggest we generalize these examples to a rationality principle:

The Assistant Principle: Your actual beliefs should match those of some imaginable rational (perhaps computer-based) assistant who lived before you, who will live after you, who would have existed in many other states than you, and who came to learn all you know when you learned it, but was once highly uncertain.

That is, there is something wrong with your beliefs if there is no imaginable assistant who would now have exactly your beliefs and info, but who also would have existed before you, knowing less, and has rational beliefs in all related situations. Your beliefs are supposed to be about the world out there, and only indirectly about you via your information. If your beliefs could only make sense for someone who existed when and where you exist, then they don’t actually make sense.

Added 8a: Several helpful commenters show that my variation is not original – which I consider to be a very good thing. I’m happy to hear that academia has progressed nicely without me! 🙂
