42 Comments

Hal Finney - Re: "More complex possibilities get lower probability than simpler ones." That might be a reasonable rule of thumb, but it hardly seems like a universal prior, or the basis for having rational priors all match. People might reasonably disagree, or even assuming they do agree, I still don't think it's enough. There is no simple and totally adequate measure of the complexity of a possibility that I can think of. And even if you had one, different people who agreed that the more complex ones were less likely could still disagree about how much less likely each "unit" of extra complexity makes a possibility. Also, if you adopt this as your criterion, does that mean that equally complex possibilities are always equally likely? That doesn't make a lot of sense to me.
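
Hal's point about the free "unit" of complexity can be made concrete with a small sketch (the hypotheses, complexity scores, and penalty bases below are all invented for illustration): two people can agree on the complexity ordering yet hold very different priors, because the penalty per unit of complexity is itself a free parameter.

```python
# A simplicity prior: P(h) proportional to base ** (-complexity(h)).
# The choice of `base` -- how much each extra "unit" of complexity costs --
# is not fixed by the rule itself, which is Hal's point.
def simplicity_prior(complexities, base):
    weights = {h: base ** (-c) for h, c in complexities.items()}
    z = sum(weights.values())
    return {h: w / z for h, w in weights.items()}

# Hypothetical hypotheses with made-up complexity scores.
hyps = {'h1': 1, 'h2': 2, 'h3': 5}
steep = simplicity_prior(hyps, 2.0)    # harsh penalty per unit
gentle = simplicity_prior(hyps, 1.1)   # mild penalty per unit

# Both priors respect the same complexity ordering...
assert steep['h1'] > steep['h2'] > steep['h3']
assert gentle['h1'] > gentle['h2'] > gentle['h3']
# ...but assign very different relative probabilities.
assert steep['h1'] / steep['h3'] > gentle['h1'] / gentle['h3']
```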

Wei, it may be clear to you that my assumption "is equivalent to saying that all pre-agents have the same pre-prior", but it is not at all clear to me. I think I could come up with a counterexample - can you come up with a proof of this equivalence?

Ok, I think I know what is going on here. Robin's "pre-rationality" assumption is that each agent's pre-prior, when conditioned on the assignment of priors, equals the prior that was "assigned" to him. This means that the "assignment of priors" in this assumption cannot be the assignment of priors by nature that I talked about earlier in my restatement of the main idea, since the nature-provided priors must contain randomness that wouldn't survive this update process.

Instead, the "assigned" prior (the one that the assumption refers to) must actually be an agent's pre-prior updated by nature's assignment of priors. Only then would updating the pre-prior by this "assigned" prior give you back the "assigned" prior. (I put "assigned" in quotes, because these priors are not actually assigned in the normal sense of the word.)

At this point it becomes clear that Robin's other assumption, that each pre-agent does not think his "assigned" prior has a special origin, is equivalent to saying that all pre-agents have the same pre-prior. After all, if you have a pre-prior different from everyone else, then your "assigned" prior is special, since it is just the updated version of your special pre-prior.
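
Wei's reading can be illustrated with a toy joint pre-prior (all numbers invented for the example): the "assigned" prior is the pre-prior conditioned on nature's assignment, so conditioning on that assignment again returns it unchanged - a fixed point of the update, which is what the pre-rationality condition requires.

```python
# Toy joint pre-prior P(world, assignment); the numbers are made up.
pre_prior = {
    ('rain', 'A'): 0.12, ('sun', 'A'): 0.28,
    ('rain', 'B'): 0.36, ('sun', 'B'): 0.24,
}

def prior_given(assignment):
    """Condition the pre-prior on the event that nature made this assignment."""
    mass = sum(p for (w, a), p in pre_prior.items() if a == assignment)
    return {w: p / mass for (w, a), p in pre_prior.items() if a == assignment}

# The "assigned" prior, on this reading: the pre-prior updated on the
# assignment event (here roughly {'rain': 0.3, 'sun': 0.7}).
assigned_A = prior_given('A')
# Updating the pre-prior on the same assignment event a second time
# changes nothing further -- the "assigned" prior is a fixed point.
assert prior_given('A') == assigned_A
```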

Wei, the whole point of math is that one's assumptions and conclusions can be described concisely, even if the proof has more detail. I've proven that common priors follow from believing your origins are not special. It may be that such a belief also constrains the pre-priors; I don't know whether it does, but that would be an interesting question to explore.

In the astronomers example, suppose one astronomer has a pre-prior that assigns a higher probability to the universe being open than the other's pre-prior does. Even if they both agree that the nature-provided priors are not special and give no information about the actual world on this issue, wouldn't they still end up with different post-update priors just from the pre-priors being different?
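
A minimal sketch of the worry (with invented numbers): if both astronomers treat nature's assignment as equally likely under either hypothesis - i.e. as carrying no information about whether the universe is open - then the update is uninformative, and the pre-prior disagreement passes straight through to the post-update priors.

```python
# Two hypothetical pre-priors over whether the universe is open or closed.
def update(pre_prior, likelihood):
    """Bayes update: posterior proportional to prior times likelihood."""
    post = {h: pre_prior[h] * likelihood[h] for h in pre_prior}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

astronomer_1 = {'open': 0.7, 'closed': 0.3}
astronomer_2 = {'open': 0.4, 'closed': 0.6}

# "Not special" assignment: the assignment event is judged equally likely
# under either hypothesis, so conditioning on it is uninformative.
flat = {'open': 1.0, 'closed': 1.0}

# Each astronomer's post-update prior equals her pre-prior; the disagreement survives.
assert update(astronomer_1, flat) == astronomer_1
assert update(astronomer_2, flat) == astronomer_2
```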

Wei, your restatement looks fine to me, and no, we need not assume pre-agents have the same pre-prior. We need only assume that your pre-prior does not think that your prior had a special origin.

I think I'm starting to get it. Let me restate the main idea, and someone let me know if I got it right.

Human beings are not born as generic reasoners without any information about the world we live in. Instead, evolution has provided us with a prior that is partially optimized for this world. However, since evolution is random and unfinished, some aspects of this prior are arbitrary. Robin's idea is that we can remove the randomness and keep only the useful information in the prior by taking the nature-provided prior as the first data point of a generic reasoner with a generic prior (which Robin calls a pre-prior, and which truly has no information about the world), instead of adopting it directly as the prior.

If I've understood it correctly so far, my remaining question is, is it assumed that all of the pre-agents have the same pre-prior? It seems to me that this updating process will remove the arbitrary differences in the nature-provided priors, but differences in the pre-priors continue to be reflected in the post-update priors. Is that correct?

Nicholas - the problem with your proof is that it assumes what it sets out to prove. The error lies in the claim "each is you, so Robin is x and Robin is y". The point is precisely to deny that Robin(t) and Robin(t+s) are identical, and so x=Robin(t) and y=Robin(t+s) are not identical either.

Now, you might think that the reasons for denying "Robin(t) is identical to Robin(t+s)" are bad ones, but I'd suggest you read Parfit on this. In any event, you're a far cry from establishing the contradiction you claim.

Wei, preferences are about you, beliefs are about the world. Beliefs should only change when the world or your info about it changes, but not otherwise change when you change. Regarding your second comment, I claim that if you find nature has assigned you a prior which violates a rationality constraint, you should reject it and replace it with a better one.

I have a couple of late comments. (And a good excuse for being late: the earthquake near Taiwan made parts of the Internet inaccessible for a few days.)

One is that I don't agree with the intuition that apparently inspired this paper. Quoting from it:

"For example, if you learned that your strong conviction that fleas sing was the result of anexperiment, which physically adjusted people’s brains to give them odd beliefs, you mightwell think it irrational to retain that belief (Talbott, 1990). Similarly it might be irrationalto be more optimistic than your sister simply because of a random genetic lottery."

I don't see why it's irrational to be more optimistic than your sister simply because of a random genetic lottery. Consider the analogous argument with preferences. If you learned that your strong taste for bitter foods was the result of an experiment which physically adjusted people's brains to give them odd preferences, you might think it irrational to retain that preference. Does it follow that it's irrational to enjoy being outdoors more than your sister simply because of a random genetic lottery?

The other comment is that I can't figure out the connection between this intuition and its formalization: "This condition in essence requires that each agent’s ordinary prior be obtained by updating his pre-prior on the fact that nature assigned the agents certain particular priors." I've tried to accept Robin's intuition for the purpose of trying to figure out what this sentence means, but so far without success. I can accept that either an agent's prior is assigned randomly by nature, or it's obtained from some pre-prior by updating. But how can both be true at the same time?

Michael, yes, previously the usual argument given for common priors was that rational beliefs should not vary with arbitrary personal characteristics. And yes, if you are modeling irrational humans, then rational Bayesians may not be what you want.

Eliezer, I mean that in *every* model where an agent has a prior, priors are common knowledge.

Robin, I'm not sure I understood that comment - did you mean that most Aumannish papers make that assumption? I certainly couldn't write down my own prior, but of course I'm not a Bayesian.

1. This is a problem that is more basic than Bayesianism. If I have the same information as you but disagree with you, I cannot really attribute my disagreement to a reason that makes my opinion better than yours. But why should rationality determine much?

2. I think that this is of no help in Bayesian games. In these cases one has to accept that the agents DO have different information (otherwise one couldn't apply them in the social sciences). Even if there were a coherent argument that there is a unique rational common prior - when one is born(!) - that would not justify the assumption of a common prior in economics.

Eliezer, FYI, Bayesians always have common knowledge of their priors.

Nicholas, all of these results are about Bayesian agents in communication with each other - not just communication, but a state of common knowledge. So an Aumann-like result for priors would say, "If you have common knowledge of each other's priors you must have the same prior" or some such. I'm not sure such a thing is true, mind you, but it can be true without implying anything in the way of "all rational agents have a common prior", just as the classic Aumann result shows that Bayesians with common knowledge of each other's beliefs have the same beliefs, but not that "all rational agents have a Unique Rational belief".

FYI: So far as I'm concerned, a probability distribution is a unified mathematical way of *viewing* beliefs of various kinds, high and low anticipations of particular experiences, and so on. A Unique Rational probability distribution would determine, up to freedoms of mere representation, a unique set of beliefs and anticipations with respect to facts and experiences.

Nicholas, this thread started from my saying "one can think of me today and me yesterday as two different agents. So the weakest coherence one could impose would be coherence for each agent. Coherence for a person, including all the agents associated with that person, is a stronger constraint." Apparently you use the word "agent" differently from me, and refer to a network of definitions and concepts with which I am not familiar. Is there another word we could substitute for "agent" in that paragraph that would be acceptable to you?
