Sleeping Beauty’s Assistant

The Sleeping Beauty problem:

Sleeping Beauty goes into an isolated room on Sunday and falls asleep. Monday she awakes, and then sleeps again Monday night. A fair coin is tossed, and if it comes up heads then Monday night Beauty is drugged so that she doesn’t wake again until Wednesday. If the coin comes up tails, then Monday night she is drugged so that she forgets everything that happened Monday – she wakes Tuesday and then sleeps again Tuesday night. When Beauty awakes in the room, she only knows it is either heads and Monday, tails and Monday, or tails and Tuesday. Heads and Tuesday is excluded by assumption. The key question: what probability should Beauty assign to heads when she awakes?
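To make the question concrete, here is a minimal simulation sketch in Python (names are mine; note that choosing to count per awakening, rather than per coin flip, is itself part of what is in dispute below):

    import random

    def heads_fraction(trials=100_000):
        # Fraction of Beauty's awakenings that occur after heads.
        awakenings = 0
        heads_awakenings = 0
        for _ in range(trials):
            heads = random.random() < 0.5
            # Heads: Beauty wakes only Monday. Tails: Monday and Tuesday.
            wakes = 1 if heads else 2
            awakenings += wakes
            if heads:
                heads_awakenings += wakes
        return heads_awakenings / awakenings

    print(heads_fraction())  # ~0.33; counting per coin flip instead gives 0.5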

The literature is split: most answer 1/3, but some answer 1/2 (and a few give other answers). Here is an interesting variation:

Imagine Sleeping Beauty has a (perhaps computer-based) assistant. Like Beauty’s, the assistant’s memory of Monday is erased Monday night, but unlike Beauty, she is not kept asleep on Tuesday, even if the coin comes up heads. So when Beauty is awake her assistant is also awake, and has exactly the same information about the coin as does Beauty. But the assistant also has the possibility of waking up to see Beauty asleep, in which case the assistant can conclude that it is definitely heads on Tuesday. The key question: should Beauty’s beliefs differ from her assistant’s?

Since the assistant knows that she might awake to see Beauty asleep, and thereby conclude heads for sure, the fact that she does not see this gives her clear info. This info should shift her beliefs away from heads, leaving her belief in heads below one half. (If she initially assigned an equal chance to waking Monday versus Tuesday, her new belief in heads is one third.) And since when Beauty awakes she seems to have exactly the same info as her assistant, Beauty should also believe heads with probability below one half.
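To spell out the assistant’s update under that equal-chance assumption: before looking, each of the four coin/day combinations gets probability 1/4, and seeing Beauty awake rules out heads-and-Tuesday, so

Pr(heads | Beauty awake) = Pr(heads and Monday) / [Pr(heads and Monday) + Pr(tails and Monday) + Pr(tails and Tuesday)] = (1/4) / (3/4) = 1/3.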

I can’t be bothered to carefully read the many papers on the Sleeping Beauty problem to see just how original this variation is. Katja tells me it is a variation on an argument of hers, and I believe her. But I’m struck by a similarity to my argument for common priors based on the imagined beliefs of a “pre-agent” who existed before you, uncertain about your future prior:

Each agent is asked to consider the information situation of a “pre-agent” who is not sure which agents will get which priors. Each agent can have a different pre-agent, but each agent’s prior should be consistent with his pre-agent’s “pre-prior,” in the sense that the prior equals the pre-prior conditional on the key piece of information that distinguishes them: which agents actually get which priors. The main result is that an agent can only have a different prior if his pre-agent believed the process that produced his prior was special. (more)
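(In symbols, with notation added here: write p_i for agent i’s prior, q_i for his pre-agent’s pre-prior, and A for the event specifying which agents actually get which priors; the consistency condition is then p_i(E) = q_i(E | A) for every event E.)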

I suggest we generalize these examples to a rationality principle:

The Assistant Principle: Your actual beliefs should match those of some imaginable rational (perhaps computer-based) assistant who lived before you, who will live after you, who would have existed in many other states than you, and who came to learn all you know when you learned it, but was once highly uncertain.

That is, there is something wrong with your beliefs if there is no imaginable assistant who would now have exactly your beliefs and info, but who also would have existed before you, knowing less, and who would have rational beliefs in all related situations. Your beliefs are supposed to be about the world out there, and only indirectly about you via your information. If your beliefs could only make sense for someone who existed when and where you exist, then they don’t actually make sense.

Added 8a: Several helpful commenters show that my variation is not original – which I consider to be a very good thing. I’m happy to hear that academia has progressed nicely without me! 🙂

  • Daniel

    I’m pretty sure I’ve heard this argument floated around, but I can’t remember which paper I’ve seen it in. Cian Dorr uses a structurally similar example to make a similar point on pp. 293-4 of this paper. More generally, examples that make implicit use of principles like your Assistant Principle are common in the literature on observation selection effects. For instance, Jonathan Weisberg criticizes Elliott Sober’s views on these issues by (effectively) arguing that Sober is committed to violations of something like your assistant principle. The argument I have in mind is offered in section 3 of this paper.

  • http://www.cs.utoronto.ca/~radford Radford Neal

    You can find this argument in my paper on Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning (there you can get both the original 2006 version and a partially revised 2007 version). Sleeping Beauty is discussed in Section 3, the argument involving an “assistant” is in Section 3.3, on “Beauty and the Prince”. I think a definitive argument for the 1/3 answer is found in Section 3.4, using what I call the “Sailor’s Child Problem”.

    For the general use of “assistants” (whom I call “companions”), see Section 2.4.

    • http://hanson.gmu.edu Robin Hanson

      You are right; see my added to the post.

    • http://infiniteinjury.org Peter Gerdes

      Interesting paper, but I’d argue that there was never any real paradox to begin with; the whole mess is just what happens when you try to argue over a concept like probability without really coming to grips with what it means. In particular, we’ve let the fact that several different notions make use of the same mathematical machinery trick us into thinking there is some SINGLE PHILOSOPHICAL notion of a person’s (rational/rationally updated) probability.

      I mean, all the math does in any of these theories is add up numbers the way we specify. Saying two events are independent has no formal meaning besides telling you to generate the function on the cross product of the spaces by multiplying the functions you have on the individual spaces. The math can’t cause or resolve any paradox; all it does is count things up.

      So back to Sleeping Beauty. Why should we ever suppose that there is one philosophical concept of probability in the first place? So what if we can use probability theory both to compute how she should bet and as a guide to what she should accept as true? The worst-case scenario is that Sleeping Beauty shows these two notions can diverge (because our philosophical principles tell us to count different things to get those results).

      Even worse, we don’t know if we even have a single philosophical concept, and we are deeply confused about what it might even mean to say something has a given probability in the truth-value sense (e.g., limiting-frequency arguments). How the hell did we come to assume there is some coherent thing called our a priori probability that we assign before experience (whatever that means)?

      In short, my answer is that probability is nothing but a way of counting things. Based on what we want to do, we may count them differently. Period, end of story. The idea of the probability of an event, full stop, simply isn’t a meaningful notion.

    • http://infiniteinjury.org Peter Gerdes

      To put the point differently: all I see is that people have given two ways of adding up some numbers, and shown that one of them corresponds to the odds you should take when betting in certain circumstances. The burden is on someone who thinks there is any problem to produce that problem.

      Until someone can tell me what they mean by probability (besides just avoiding Dutch books), and give me a reason to believe it’s a coherent notion that nevertheless seems to result in contradiction or an absurd result, I don’t even see a paradox to be answered.

      So unless someone actually explains what they mean by probability and then makes a convincing case for paradox in this area, what answer can you give but a shrug? It’s like someone saying ‘2+2=4 but 2+3=5’ and asking how we can resolve the paradox. You might diagnose their confusion, but there would be nothing to resolve.

  • Jonathan Colvin

    Robin: You didn’t note whether you are a halfer or a thirder! (I’m a halfer btw).

    W.r.t. Sober: he does get surprisingly tangled in his selection arguments. Sober later accepted that Weisberg is correct, but then argued that there is a further selection effect. But now I can’t find the reference.

  • Matt Knowles

    In the quoted statement of the problem, I don’t see why it would be “heads and Monday”. The coin isn’t tossed until Monday night…

    I would restate the possibilities as either: 1) tails and Tuesday; 2) tails and Wednesday; or 3) heads and Wednesday.

    Assuming you’re asking what probability she should assign to heads when waking on Tuesday or Wednesday, I don’t see how it could be anything but 1 in 3. There are three possible ways she can wake up, and only one of those is heads…

    And I don’t see how being drugged to forget everything that happened Tuesday is anything but a distraction. It shouldn’t affect her assessment of the odds, should it?

  • http://thecandidefund.wordpress.com/ dirk

    I disagree with The Assistant Principle on the basis that Sleeping Beauty’s Assistant has more information than Sleeping Beauty. Why shouldn’t more information lead to different beliefs?

    As for SB, she should give 1/2 odds to heads Monday because the odds of flipping heads are 1/2. If the outcome is tails then there is a (1/2)*(1/2) chance it is Monday tails and (1/2)*(1/2) chance it is Tuesday tails. What laws of probability are you using to calculate otherwise?

    The Assistant, on the other hand, is like the lucky contestant in the Monty Hall problem who gets to see behind one of the doors that doesn’t contain the prize, and therefore adjusts his odds calculation based upon more information. Your argument is the equivalent of saying that a contestant on Monty Hall should change his calculation of the odds as if he had more information than he really does.

    So it is logical for SB to put the odds at 1/2 while it is logical for her assistant to put them at 1/3.

    • http://thecandidefund.wordpress.com/ dirk

      OK, I’ve changed my mind. You are correct, sir. What I left out of my calculation was that SB knows what situation does not exist, so she can discard that possibility like a card she saw burned from the deck.

      • Jonathan Colvin

        Don’t follow. What situation does SB know does not exist?

  • Jeremy

    I believe the methods to arrive at both 1/3 and 1/2 are fundamentally flawed, in that they assume that tails-Monday and tails-Tuesday are separate events. If the coin shows tails, then both tails-Monday and tails-Tuesday will occur with probability 1. So Beauty only has two possible outcomes to consider: [heads-Monday] or [tails-Monday AND tails-Tuesday]. The answer then has to be 1/2, but not for the reasons the 1/2ers put forward.

  • Jonathan Colvin

    How does sleeping beauty have the same info as the assistant? The assistant knows that it could have observed beauty asleep, whereas for beauty this is impossible. The assistant thus has information that beauty is lacking.

  • JeffJo

    Robin’s comparison to the assistant is a valid explanation for why the probability is 1/3. But in my opinion, it fails to convince because it doesn’t isolate the reason why 1/2 is wrong.

    The root cause of the controversy is confusing the occurrence of an outcome with the observation of that outcome. Heads and Tuesday is *not* excluded from occurring, it is just not observed by Sleeping Beauty when it does. The event itself still happens. A random time during this experiment has a 1/4 chance to be any of the four combinations Heads and Monday, Tails and Monday, Heads and Tuesday, or Tails and Tuesday. What Sleeping Beauty knows, that changes these probabilities, is that she won’t observe Heads and Tuesday, not that it won’t happen.

    The answer to the question is now quite simple:

    Pr(Heads|Observe) = Pr(Heads and Observe)/Pr(Observe) = (1/4)/(3/4) = 1/3.

    Here’s a better version of the Assistant that makes this clearer. I thought it up independently, and found this site when researching where I should go with it:

    Rip van Winkle is given a bedroom on the other side of the building from Sleeping Beauty’s. He is never given the stay-asleep drug, but is given the amnesia drug Monday night. He has no contact with Sleeping Beauty, and no way to tell what day it is. But he gets asked the same question, about the same coin flip, on Monday. On Tuesday, he is asked the question again if the flip was tails. If it was heads, he is released from the experiment without being asked the question.

    The only difference in the problems is that where Sleeping Beauty cannot observe a heads-and-Tuesday time at all, Rip van Winkle can observe it and distinguish it from the other three possibilities. But if he observes that the question is asked, Rip’s information is identical to Beauty’s. And his answer is trivially 1/3, by the method I used above.
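    As a quick check of that 1/3, here is a minimal simulation sketch in Python of the Rip van Winkle setup just described (names are mine):

        import random

        def rip_heads_fraction(trials=100_000):
            # Fraction of the times Rip is asked that the coin shows heads.
            asked = 0
            heads_asked = 0
            for _ in range(trials):
                heads = random.random() < 0.5
                for day in ("Monday", "Tuesday"):
                    # On a heads Tuesday, Rip is released without being asked.
                    if heads and day == "Tuesday":
                        continue
                    asked += 1
                    if heads:
                        heads_asked += 1
            return heads_asked / asked

        print(rip_heads_fraction())  # ~0.33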

  • Robert

    The key question: what kind of dystopian society is this that doesn’t have calendar watches?