10 Comments

Clearly the problem here is that A does not like B and B does not like A, because they are competing for resources. Reason doesn't play a part in the Rabbi's answers: he doesn't even stop to think the matter through before saying 'you're right, you're right', even to his wife. He just takes the easy way out and agrees with them all.

Firstly, not only could both men be wrong (a socialist or communist doctrine may hold that "All property is theft"), but the wife is also wrong, because both men can be right, and wrong, at the same time.

There's no case to answer here. The Rabbi is only capable of agreeing, and does not have the linguistic tools to articulate the reality of the situation.

Sorry Hal. R.I.P. btw: Thank You for RPOWs!! :)


pdf23ds - It may seem reasonable to think that way, but really it's not. Consider a simpler case where there is someone so smart that whenever you interact with him, he can convince you of anything. In fact he can convince you that A is true and then turn around and convince you that A is false. Then it might seem that you can reasonably expect that, after you interact with him, your opinion on a subject will shift toward whatever position you know in advance he will advocate.

But eventually you should realize that since he has this ability, even though he provides a convincing argument in favor of his position, you know that he could provide an equally convincing case for the other side. This knowledge should discredit his argumentation and cause you to reject his position, no matter how persuasive it seems to be on the surface. It's not reasonable to be convinced of something when you know that there is just as compelling an argument against it, even though you don't know what that argument is.
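A rough way to put this in Bayesian terms (my framing, with made-up numbers): if you believe he would produce an equally convincing case whichever side is true, his argument has a likelihood ratio of 1 and should leave your prior untouched.

    # Minimal sketch (my numbers): a convincing argument from someone who
    # can argue either side equally well carries no evidential weight.
    def posterior(prior_a, p_arg_if_a, p_arg_if_not_a):
        """Bayes' rule: P(A | convincing argument for A)."""
        joint_a = p_arg_if_a * prior_a
        return joint_a / (joint_a + p_arg_if_not_a * (1.0 - prior_a))

    # Ordinary arguer: a convincing case for A is likelier when A is true.
    print(posterior(0.5, p_arg_if_a=0.8, p_arg_if_not_a=0.2))  # 0.8

    # The super-persuader: he convinces you either way, so the likelihood
    # ratio is 1 and the posterior equals the prior: no movement.
    print(posterior(0.5, p_arg_if_a=1.0, p_arg_if_not_a=1.0))  # 0.5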

To use the possible-world formulation, if you know that there is information that will cause you to have belief X on a matter, then you know that you exist in a set of possible worlds where belief X is reasonable, and therefore you should have belief X now.


How does one determine who is an expert in any particular field? Seems to me that determination is itself based on some weighted average, and we're going to have turtles all the way down.


Daniel, yes, it is well known that one can't just apply the same syntactic average of beliefs independent of the subject. See: Genest & Zidek, "Combining Probability Distributions: A Critique and an Annotated Bibliography", Statistical Science 1986. So your model of error needs to guess enough about the process that produces beliefs to estimate which beliefs the errors show up at; the errors at other beliefs would then be derivative.

I should have mentioned that Carl Shulman is also right, that one needs to think about the correlations between different error sources: http://www.overcomingbias.c...
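One toy way to see why those correlations matter (my own simplifying assumptions, not a summary of the linked post): if every opinion has error variance sigma^2 and every pair of errors shares a correlation rho, the simple average's variance is sigma^2 * (1 + (N - 1) * rho) / N, which stops improving as N grows.

    # Toy model (mine): N opinions, each with error variance sigma**2 and
    # pairwise error correlation rho. Variance of their simple average:
    def variance_of_average(n, sigma, rho):
        return sigma**2 * (1 + (n - 1) * rho) / n

    for n in (1, 10, 100, 1000):
        print(n,
              round(variance_of_average(n, sigma=1.0, rho=0.0), 4),  # independent errors
              round(variance_of_average(n, sigma=1.0, rho=0.3), 4))  # shared error sources
    # With rho = 0.3 the variance levels off near 0.3: common sources of
    # error (same news, same schooling, same biases) cap the gain from averaging.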


On the other hand, I think there's fairly good evidence for the proposition that people actually do apply something like the agreement theorem, that is to say, they move towards what they take to be social consensus, going so far as to repress discrepant information even if it has an empirical basis - and we consider that a cognitive bias!

The problem is that agreement would hold in a rational-expectations world, where errors are randomly distributed - but they are not. Come to think of it, the answer to the paradox that you can't know that you are the rational one, and therefore you should tack towards consensus, is that you *know* you aren't perfectly rational because you know you are human and subject to cognitive bias - and so is your interlocutor, unless they are HAL 9000.
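A toy simulation of that point (the numbers are mine): averaging beats down independent, zero-mean errors, but a bias everyone shares survives the averaging intact.

    # Toy simulation (my numbers): consensus via averaging works when errors
    # are independent and zero-mean, but not when everyone shares a bias.
    import random

    random.seed(0)
    truth, n = 10.0, 1000

    independent_errors = [truth + random.gauss(0, 2) for _ in range(n)]
    shared_bias = [truth + 3.0 + random.gauss(0, 2) for _ in range(n)]

    print(sum(independent_errors) / n)  # close to 10: random errors wash out
    print(sum(shared_bias) / n)         # close to 13: the shared bias does not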


Robin,

I find your averaging suggestion very intuitively attractive, but I worry that it might be hard to cash it out formally. My worry is easy to explain using a simple, binary belief model, but from what I understand it carries over to more realistic models that use degrees of belief.

Suppose A, B, and C are all equally expert in some domain, and they are considering 3 propositions within this domain: p, p-->q, and ~q. A believes p and ~q, B believes p-->q and ~q, and C believes p and p-->q. If I, as a bystander with no expertise in the domain, decide to go with the majority of experts on these three questions, I end up believing p, p-->q, and ~q, which is of course inconsistent. So, if I try to form binary beliefs by going with the majority of experts in some domain, I can end up with inconsistent beliefs. Like I said before, my understanding is that one can also end up with probabilistically incoherent degrees of belief if one tries to do a weighted average of the degrees of belief of experts in some domain.
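Here is a small script that checks the example (the encoding of each expert's third, implied verdict is mine): proposition-wise majority voting accepts p, p-->q, and ~q together, even though each expert's own verdicts are consistent.

    # Each expert's verdicts on the three propositions; the third verdict in
    # each row is the one forced on them by consistency (my encoding).
    experts = {
        "A": {"p": True,  "p-->q": False, "~q": True},
        "B": {"p": False, "p-->q": True,  "~q": True},
        "C": {"p": True,  "p-->q": True,  "~q": False},
    }

    def majority(prop):
        return sum(v[prop] for v in experts.values()) > len(experts) / 2

    def consistent(verdicts):
        # Consistent iff the verdict on p-->q matches material implication,
        # given the verdicts on p and ~q.
        p, q = verdicts["p"], not verdicts["~q"]
        return verdicts["p-->q"] == ((not p) or q)

    group_view = {prop: majority(prop) for prop in ("p", "p-->q", "~q")}
    print(group_view)                                    # all three come out True
    print(all(consistent(v) for v in experts.values()))  # True: each expert is fine
    print(consistent(group_view))                        # False: the majority view is not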

This isn't an original observation. I saw it in a presentation by Christian List, and lots of people who work on "judgment aggregation" are interested in issues like these. I'm not sure what the upshot should be. Clearly the weight of expert opinion about issues should affect our beliefs - if other people have thought about some question and weighed the various arguments and evidence, we'd be fools not to take their beliefs into account in forming our own beliefs about the question. But it's far from clear that the way we should take others' beliefs into account is by taking a weighted average of their degrees of belief.


Hal, a first cut would be to assume people have the same error rates, and look for a simple average. A second cut would be to weigh people according to their apparent expertise in the topic. It would not make sense to use people's opinions on this topic to estimate their expertise. Once you have such an average of other people's opinions, you would need a very good reason to move your opinion much away from it.
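A concrete sketch of those two cuts (treating each opinion as a point estimate and "apparent expertise" as an assumed error variance is my simplification, not necessarily the intended model):

    # Sketch of the two cuts, with hypothetical numbers. Opinions are point
    # estimates; "apparent expertise" is encoded as an assumed error variance
    # (my simplification), so the second cut is an inverse-variance weighting.
    opinions = [0.9, 0.40, 0.50, 0.55]   # hypothetical stated estimates
    error_vars = [4.0, 1.0, 0.5, 0.5]    # hypothetical; larger = less expert

    # First cut: assume equal error rates and take the simple average.
    simple = sum(opinions) / len(opinions)

    # Second cut: weight each opinion by 1 / (its assumed error variance).
    weights = [1.0 / v for v in error_vars]
    weighted = sum(w * x for w, x in zip(weights, opinions)) / sum(weights)

    print(round(simple, 3), round(weighted, 3))
    # The apparently less expert outlier moves the weighted estimate much less;
    # that weighted "middle" is the number to be reluctant to move away from.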


I don't know that it's necessarily unreasonable for someone to *expect* that a conversation with A will leave them closer to A's position, and that the same conversation with B would leave them closer to B's position. There are two main factors here. First, A and B often base their arguments on different sets of intuitions (e.g. moral intuitions). (The intuitions are the same between them, but they disagree as to which intuitions apply or overrule.) So without knowing beforehand which intuitions can be applied to the situation, one can expect that both sets of intuitions will be persuasive, but not compelling.

The second factor is that one can expect that A's arguments will be flawed, but that one won't be able to immediately see all of the flaws. Talking to B would bring some of those flaws to your awareness (though not all of them, along with some irrelevant or incorrect objections to A's position). Talking to A again could raise some flaws in B's objections, and so on. Are ideal Bayesian reasoners supposed to be perfectly able to see all logical inconsistencies?
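For reference, the ideal-Bayesian constraint these questions push against is that, averaged over the arguments you might hear, your expected posterior equals your prior, so you cannot expect in advance to end up closer to A. A toy check with made-up numbers:

    # Toy check (made-up numbers) of the Bayesian identity at issue:
    # averaged over the possible arguments, the expected posterior equals
    # the prior, so no predictable net drift toward A is allowed.
    prior = 0.5
    p_arg_if_true = {"strong": 0.6, "weak": 0.3, "bad": 0.1}   # hypothetical
    p_arg_if_false = {"strong": 0.2, "weak": 0.3, "bad": 0.5}  # hypothetical

    expected_posterior = 0.0
    for arg in p_arg_if_true:
        p_arg = p_arg_if_true[arg] * prior + p_arg_if_false[arg] * (1 - prior)
        posterior = p_arg_if_true[arg] * prior / p_arg
        expected_posterior += p_arg * posterior

    print(expected_posterior)  # 0.5 (up to float rounding): equal to the prior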


You're right, you're right.

:)

Your proposal about estimating errors makes sense, but do you mean this as a practical or a theoretical suggestion? It sounds very complicated and difficult to do in detail, but will a drastically simplified version work? Can you give any examples (perhaps in future blog posts) where you demonstrate the technique on a controversy?

I certainly agree that once you've come up with your best guess at the truth, based on this technique or whatever other hopefully-unbiased method you can find, you should agree with it.


When you see a distribution of opinion among people, of which they are fully aware, you can conclude they are far from meta-rational. But this does not at all mean you should ignore their opinions and just think for yourself. You instead should estimate an error model, saying what sorts of people are likely to have how much error, and then use that model to estimate a best "middle" summary estimate. You should be very reluctant to disagree with this estimate.
