15 Comments

My mistake - I thought the Bayesian view was agnostic as to where priors came from. In fact, I had in mind an example of yours -- from 'Are Disagreements Honest?' -- where John and Mary have different impressions of the make, color, and age of a fleetingly glimpsed car. Would the actors' estimates of the probability that the car was, say, more than three years old, based on what they (thought they had) seen, not be valid priors?

Priors don't come from experience.

Do not the priors arising from first-person experiences have special origins? We can tell other people about how our experiences seemed to us, but they will only gain an approximate sense of what those experiences were like for us, one colored by their own prior experiences (just as our own prior experiences color our own experience of our sensory inputs). Even if we believe that our universe is strictly deterministic (must all Bayesians agree on this?), the causal processes by which sensory inputs become models of the world are presently opaque to us.

This is, of course, one reason why anecdotal evidence is frowned upon, yet such evidence (in the form of eyewitness accounts) is often central to legal proceedings, a field where rationality is supposed to be upheld. Furthermore, eyewitness evidence is often used in criminal investigations to assert very concrete things, such as whether a suspect could possibly have caused certain events.

Economists may think they can avoid the problem by hewing exclusively to quantitative data, but first-person experience may have even more influence on one's concept of utility than of probability, especially as the question 'what do people want?' looms large in the field.

Suppose I create an agent and give it totally mad priors. Because of how I created it, this agent doesn't consider changing its priors; it's a pure first-order Bayesian prior updater. You can see that the agent is obviously stupid, so do you want to change your priors to match it? To really prove the point, I could make 2 agents with different mad priors and watch them disagree.
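To make the two-agent point concrete, here is a minimal sketch (my own illustration, not the commenter's) of two pure Bayesian updaters that see identical evidence yet stay far apart because they start from different near-dogmatic priors:

```python
# Two first-order Bayesian updaters with different "mad" (near-dogmatic)
# priors on a binary hypothesis H. They update on the same evidence and
# still end up disagreeing sharply.

def update(prior, likelihood_if_true, likelihood_if_false):
    """One step of Bayes' rule for a binary hypothesis H."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Both agents see the same evidence: ten observations, each twice as
# likely if H is false as if H is true.
observations = [(0.2, 0.4)] * 10

agent_a = 0.999   # near-dogmatic prior that H is true
agent_b = 0.001   # near-dogmatic prior that H is false
for like_true, like_false in observations:
    agent_a = update(agent_a, like_true, like_false)
    agent_b = update(agent_b, like_true, like_false)

print(agent_a, agent_b)  # roughly 0.49 vs. 0.000001: still far apart
```

With priors of exactly 0 or 1 neither agent would move at all, which is the self-justifying failure mode described next.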

Sure, you have a good reason why your priors are sane: evolution. But there are many mad priors that are also self-justifying. If my priors had been randomly changed from one self-justifying state to another, I would still endorse whatever I ended up with.

Suppose that last week your priors put 100% on god existing, and justified that by saying god would make humans with good priors. Your priors were then modified to your current priors; would you want to change back?

Yes, once you accept that rational agents would not knowingly disagree, you are drawn to conclude that humans don't actually disagree so much as they pretend to.

So the upshot is, differing origins of priors are not a reasonable solution to the puzzle of disagreement, right?

I think I know what *is* a reasonable solution (and a very Hansonian one, in fact): it's the utility thing. We have roughly common priors and uncommon utility functions, but pretend to have uncommon priors and a common utility function in order to persuade others to update towards our preferred outcomes. (Although I think this dishonesty leaves us uncertain about others' true priors as well.)

Or put another way, the conscious part of Homo Hypocritus has a prosocial utility function, a highly idiosyncratic prior, and either irrational updating rules or a hard-to-justify belief that its priors are special. The unconscious part has a selfish utility function, a more common prior, updates rationally, and has no especially unlikely beliefs.

So... In addition to debates on facts and debates on priors, we can also have debates on origins of priors?

"Your priors are due to racism!"

"Your priors are due to Gramscian damage!"

etc. etc.

Okay, but many people are far from sure that they "want to generalize classical propositional logic to assign a degree of certainty to every proposition, and … retain certain properties of propositional logic."

"Some people simply declare that differing beliefs should only result from differing information, but others are not persuaded by this."

Suppose that ALL the information the agents have is in logical form: a set of propositions S. Given a proposition A, what are legitimate possibilities for P(A | S)? The following paper shows that there is only one possibility. Hence, different people with the same information MUST have the same probabilities if that information is ALL in logical form.

Specifically, the paper shows that if you want to generalize classical propositional logic to assign a degree of certainty to every proposition, and you want to retain certain properties of propositional logic, then you unavoidably end up with probability theory as your extended logic; and furthermore,

P(A|S) = #(A & S) / #(S)

where #(B) is the number of truth assignments satisfying proposition B.

"From propositional logic to plausible reasoning: A uniqueness theorem"https://www.sciencedirect.c...orhttps://arxiv.org/abs/1706....

I'm making no claims about now vs. then. My work is overall appreciated, but the world and I disagree about how important each piece of my work is.

OK good. But was that an admission that your work (as judged by others) is better now, better understood now, or just inconsistently appreciated?

And I hate to add this, but as a (to quote AM radio) "long time listener, first time caller," this system is unbelievably annoying to navigate.

I made sure to imply that other work of mine is valued, by mentioning my total cites.

I'm just worried about the signalling effect of saying people don't value your early work.

The robust generalization of common knowledge is common belief. The allowed degree of disagreement grows linearly as the degree of common belief falls away from one.
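A sketch of what that linear claim can mean formally, following my reading of the Monderer-Samet common p-belief results (the constant 2 is my assumption, not something asserted in the comment):

```latex
% With a common prior, if it is common p-belief that the agents'
% posteriors for an event E are q_1 and q_2, then
\[
  |q_1 - q_2| \;\le\; 2\,(1 - p),
\]
% so the permitted disagreement shrinks linearly as p rises toward 1,
% and vanishes at p = 1 (full common knowledge).
```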

Non-economist here. I feel like I'm making one of those stupidly obvious critiques that outsiders think are clever and make insiders groan, but I can't see what's wrong with it.

Is a common knowledge assumption considered "reasonable"? It seems both unrealistic and highly unstable, as in it will not produce approximately the same results when relaxed slightly. If I'm less than totally sure you know what I know, theorems collapse like collateralized debt obligations.

I know your name is Robin, and I'm telling you that I know your name is Robin, but we do not and never will have common knowledge that your name is Robin. See, it's not clear to me that you know that I know that you know that I know your name is Robin, and if I myself am not sure, how could you be sure I know that?

Neither of us needs to be irrational in order to not agree. We just need to not be absolutely certain that the other agent has symmetric knowledge of the other's rationality.
