26 Comments

Nick, it seems the issue we most need to consider in order to explore this topic further is the appropriateness of Bayesian-like analysis. But as it is a framework intended to account for a wide range of issues in inference, we should judge it overall, in terms of all of the intuitions it may or may not conflict with, relative to other possible frameworks of analysis. Since we expect some of our intuitions to be in error, finding a few conflicts with intuition should not discourage us from embracing Bayesian analysis. I invite you to take the first shot by posting sometime on what you see as the most serious problems with the Bayesian approach.


I see this is in effect being further addressed in new posts, so I’ll just answer the last two points and then leave it.

Eliezer (apologies for mis-spelling your name in the last post): The answer to your challenge is in the original post, where I offered precisely such an argument. Secondly, I think you are mistaken about where the burden of proof lies. No one is disputing that sometimes there are unreasonable disagreements, which is all that your examples go to show. But you are simply assuming that Bayesianism is true. My point is that it is no less reasonable, and perhaps more reasonable, to start from the premiss that people do reasonably disagree (indeed, some would argue that we are morally required to accept that premiss), and if Bayesianism conflicts with that, so much the worse for Bayesianism.

Robin: It is not that I am uninterested in or unsympathetic to the formal results, but I am bringing into view ways in which it might be argued that the formal model seems to give the wrong answer. Guy brought out the point about idealisation at the end of enquiry versus our actual situation. There is a lot to be discussed about whether and when idealisation is a clarification rather than an obscuration of philosophical issues. I have been pressing a different point, namely the requirement that reasons for a belief should have content that is relevant to the truth of the content of the belief. I have drawn your attention to a specific argument about endurance versus perdurance which turns on the problem of temporary intrinsics. Attending to that argument shows why Lewis thinks temporary intrinsics means perdurance is true whilst Van Inwagen does not, and the reasons that Lewis adduces are other metaphysical doctrines, in particular doctrines about what it is to be an intrinsic property. The belief in their disagreement has no content that bears on the issue. It is quite irrelevant. If you add it as a premiss to either of their arguments, it sits as an idle cog. It can neither justify nor defeat any of the reasons they adduce in this dispute. So I have given an example of the way in which disagreement, at least prima facie, has no rational significance, and your reply is to say, well yes, but in my formal model with impossible possible worlds it does. Fine, say I; so much the worse for your model!


Nick S, I presume we agree that there is a true answer to the question of the "persistence of identity through time," as otherwise there is no point in disagreeing about that topic. Even if you thought that this claim was either true in all possible worlds, or false in all possible worlds, in the usual sense of a self-consistent "possible world," we can invoke the concept of an "impossible possible world" which need not be self-consistent.

Using this concept of an impossible possible world, we can formally express the idea that as you proceed with error-prone analysis, you may mistakenly believe a claim that is not consistent, but that as you conduct more analysis you will make fewer such mistakes. In such a framework the fact that someone else proceeding similarly disagrees with you is a powerful relevant clue about the errors in your analysis, just as ordinary disagreement is a powerful clue about ordinary inference.
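To make that concrete, here is a minimal toy model (my own sketch, not part of the formal framework being discussed) in which each analyst independently errs with some probability; conditioning on a peer's disagreement sharply raises the probability that one's own analysis went wrong.

```python
# Toy model (illustrative assumptions, not the formal framework itself):
# each of two analysts independently reaches the correct verdict on an
# a priori question with probability p, and errs otherwise; an error
# flips the verdict on a binary question.

def p_i_erred_given_disagreement(p):
    """P(my analysis erred | the other analyst's verdict differs from mine),
    assuming independent, equally reliable analysts."""
    p_disagree = 2 * p * (1 - p)          # exactly one of us erred
    p_erred_and_disagree = (1 - p) * p    # I erred, the other did not
    return p_erred_and_disagree / p_disagree

# Before learning of any disagreement I think I erred with probability 0.1;
# after learning that a similarly reliable peer disagrees, that jumps to 0.5.
print(p_i_erred_given_disagreement(0.9))   # 0.5
print(p_i_erred_given_disagreement(0.99))  # 0.5
```

With symmetric analysts the answer comes out to one half no matter how reliable each is, which is one way of seeing why the bare fact of disagreement is such a powerful clue about errors in one's own analysis.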


Nicholas, just to amplify on that, what I'm challenging you to do is justify the step from "philosophical disagreement is a fact of our lives" to "reasonable philosophical disagreement is a fact of our lives". That's the key step, and you can't just say you observed it - one wishes to know what justification for the term "reasonable" is strong enough to override normative Bayes. The fact that it *feels* very reasonable to you is not strong enough to override Bayes, because conjunction fallacies, and availability biases, and nine-tenths of the known fallacies in the lexicon, *feel* reasonable.


Heh. Nicholas, whilst I can see your inexorable chain of logic, one very much wishes to know what possible desideratum - nay, not even the word of God spoken from out of the clouds - could possibly support "2. People do reasonably disagree" when no less a desideratum than NORMATIVE BAYESIANISM says otherwise.


Eleizer: the argument would be

1. Normative Bayesianism implies that people cannot reasonably disagree.
2. People do reasonably disagree.
3. What is actual is possible.
4. Therefore people can reasonably disagree. (2, 3)
5. Therefore normative Bayesianism is false. (1, 4 modus tollens)
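For what it is worth, the propositional skeleton of that argument can be checked mechanically. Here is one rendering (my own, with `B` and `D` as schematic stand-ins) in Lean:

```lean
-- Schematic rendering of the argument (my own labels, not the commenter's):
-- B = "normative Bayesianism is correct", D = "people can reasonably disagree".
example (B D : Prop)
    (h1 : B → ¬D)  -- premiss 1: Bayesianism rules out reasonable disagreement
    (h4 : D)       -- premisses 2 and 3: disagreement is actual, hence possible
    : ¬B :=        -- conclusion 5, by modus tollens
  fun hB => h1 hB h4
```

Of course, the formal validity of the schema is not in dispute; the argument turns entirely on whether premiss 2 can be maintained in its "reasonably" reading.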

Robin: of course I grant that *any* kind of disagreement (not just ordinary ones) *might* be unreasonable. That’s not the issue. The point I was making was that in the case of beliefs for which we can give a causal truth-tracking account, we can at least give some kind of account of why someone disagreeing with me could be *evidence* for me, and so count among the reasons I possess that bear on the belief in question, whereas that is not the case with respect to a priori propositions in dispute between epistemic peers. I reiterate my earlier remark: the only way Van Inwagen can take cognisance of Lewis’s disagreement is by attending to what Lewis says about why he disagrees (etc.). The relevant sense of ‘can’ is rational epistemic possibility. You said this was clearly false, but it’s not. I did not say that the only way anyone could ever take cognisance was by attending to what is said; I said it of *Van Inwagen* in relation to *Lewis* with respect to the *metaphysical issue of persistence of identity through time*. The crucial point here is that there comes a point at which other people’s reports of their disagreement may have no rational significance. They have to say something which bears on the issue in virtue of the relevance of the content of what they say to the issue. That there are other occasions on which I might, as you put it, ‘change my opinion based on the fact of your conclusion’ because ‘All I need is a belief that your conclusions tend to be well-founded’ is irrelevant.


Nick S, you seem willing to grant that ordinary disagreements may be unreasonable, but want to carve out an exception for disagreements about "philosophical doctrines and other a priori truth." I admit I've always had trouble understanding some of these distinctions.

For most topics there is a sensible distinction between people with different basic evidence and people with different analyses of that evidence. The simplest Bayesian formulation does not allow for different analyses, but it can be straightforwardly generalized to allow for them, and then the standard disagreement results remain.

If the concept of a priori truths is fundamentally different from different analysis of the same evidence, then the question is whether we can make sense of "impossible possible worlds" in which these a priori questions have different answers. If we can, then the usual Bayesian results will hold.


"Furthermore, someone could argue that since reasonable disagreement is a fact of our lives, the very strong result about disagreement is a reductio of Bayesianism as a model for belief."

Why isn't this pure naturalistic fallacy? Plenty of gross reasoning errors are facts of modern-day human life. If you mean that it's a reductio of Bayesianism as a *descriptive* model of human belief, no one sane uses it that way.


Robin: The formal results are certainly of considerable interest, but Bayesianism is a mathematical model of belief, and there naturally arise questions about the extent of its applicability. Of its nature Bayesianism dispenses with a great many distinctions between kinds of beliefs that have philosophical significance. The claims you are making about the weight of evidence given by disagreement are more appealing when applied to beliefs for which a notion of causal truth tracking makes sense. But when it comes to philosophical doctrines and other a priori truths, it makes less sense. Furthermore, someone could argue that since reasonable disagreement is a fact of our lives, the very strong result about disagreement is a reductio of Bayesianism as a model for belief.


Guy, I fear I am reaching the limits of my ability to converse seriously with philosophers while trying to "pick up" their language via random readings. At least that interpretation seems preferable to what you seem to be saying, namely that we are each justified in giving ourselves the benefit of the doubt that our hunches are insight, in a way that we are not justified regarding others. In moral philosophy, giving yourself more benefit of the doubt about the morality of your actions is considered a self-favoring bias. Why not here also?

You and Nick S. both seem to say that philosophers consider disagreement to be only weak evidence, whereas I say that in detailed Bayesian models it is very strong evidence. Could it be that philosophers are not sufficiently aware of this result? Or would that still not be convincing to them?


Let's be clear first about the kind of disagreement we're disagreeing about. The claim isn't that reasonable disagreement is possible at the Peircean ideal 'end of inquiry'. It's that reasonable disagreement is possible at a point far from the end of inquiry, even if, at that point in time, it may not be possible to advance the inquiry any further.

The dichotomy between incorrigible beliefs and beliefs that need justification is exactly one that, as I remarked, many epistemologists reject. My belief that there is an external world, or that I have two hands, is certainly fallible, even corrigible, but I don't need to rule out the various ways in which it may turn out false in order to be justified in holding it, indeed even in KNOWING these things to be true.

Robin, you're interpreting the situation between van Inwagen and Lewis in precisely the way I suggested many epistemologists would reject. van Inwagen needn't concede that he has a hunch or impression, weak or strong. He BELIEVES that p; he is not merely strongly inclined to believe that p. And if so, he can use p as a premise. This belief, again, isn't in any way infallible. The question, though, is whether the mere fact that he discovers that Lewis disagrees with him is reason for him to suspend judgement on it and re-endorse it only after he can show to his own satisfaction that he's right and Lewis wrong. Many epistemologists would deny that this gives him any such reason.


Guy, I'm getting confused about our terminology. Let us say an "impression" is a hard-to-communicate influence on belief, and let us say a "hunch" is an impression that comes with another impression, which says that the first impression rests on some sort of not-fully-conscious but epistemically justified basis of evidence or analysis. Following Nick's usage, let us only call a hunch an "insight" if it influences our beliefs in the right direction. Otherwise it is in error, a "misleading hunch." Finally, let us call it a "powerful hunch" if it comes with a hunch that its basis is unusually strong, compared to typical hunches.

Van Inwagen starts out aware that he has a hunch, which he reasonably presumes is insight. But then he becomes aware that Lewis disagrees, and so Van Inwagen must conclude that Lewis has a contrary hunch. At this point Van Inwagen could be justified in disagreeing if he had a powerful hunch, which Lewis is unlikely to also have, since such things are unusual. But if so he could just tell Lewis this fact, and then Lewis should change his mind, and they would no longer disagree.

But if Van Inwagen has only a hunch of ordinary strength, I don't see what basis he has for thinking his hunch is more likely to represent insight than Lewis's. He is not justified in assuming that it is insight just because it is his hunch. Whatever his estimate of the relative quality of their hunches, he (and Lewis) should then choose a middle belief that reflects that relative quality.
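To illustrate that last point with a toy calculation (mine, using made-up reliability numbers): if the only asymmetry between them is the estimated quality of their hunches, the "middle belief" falls out of a simple odds calculation.

```python
# Toy calculation (illustrative numbers, not anyone's actual credences):
# each philosopher's hunch points the right way with some probability,
# errors are independent, and their hunches point in opposite directions.

def p_my_hunch_is_insight(p_mine, p_theirs):
    """P(my hunch is the insightful one | our hunches conflict)."""
    mine_right = p_mine * (1 - p_theirs)
    theirs_right = p_theirs * (1 - p_mine)
    return mine_right / (mine_right + theirs_right)

# Hunches of ordinary, equal strength give no basis for favouring one's own:
print(p_my_hunch_is_insight(0.7, 0.7))   # 0.5
# A genuinely powerful hunch (0.9 vs 0.7) shifts the middle belief, but only
# in proportion to the relative quality of the two hunches:
print(p_my_hunch_is_insight(0.9, 0.7))   # ~0.79
```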


Are we talking here about incorrigible evidence, which is so certain that no doubt is possible? If that were the case, would it be logically possible for two people to have different and inconsistent incorrigible beliefs? I would say, no, that is logically impossible.

Hence, given the disagreement, the incommunicable beliefs and evidence must be admitted to be uncertain. Each person should estimate how much uncertainty he has about this incommunicable evidence, and they should communicate these estimates to each other. Since they have good faith beliefs in each other's rationality and honesty, they can view these estimates as unbiased, and should therefore each adopt the evidence which has the higher degree of estimated reliability. In this way they will reconcile their differences.
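A minimal sketch of that reconciliation procedure, under my own simplifying assumption that each party can report a single reliability estimate for their incommunicable evidence:

```python
# Hedged sketch (invented numbers): each party reports how reliable they
# judge their own incommunicable evidence to be, and both then adopt the
# conclusion backed by the higher reported reliability.

def reconcile(report_a, report_b):
    """Each report is a (conclusion, estimated_reliability) pair."""
    return max([report_a, report_b], key=lambda r: r[1])[0]

# A believes "endurance" on evidence he rates 0.7 reliable;
# B believes "perdurance" on evidence she rates 0.9 reliable.
print(reconcile(("endurance", 0.7), ("perdurance", 0.9)))  # perdurance
```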


Robin, let's distinguish the question of what someone should believe given he knows that Lewis and Van Inwagen disagree from what VAN INWAGEN should believe given his knowledge that Lewis disagrees with him.

Since van Inwagen HAS a particular insight, my point was merely that the mere fact of Lewis's disagreement isn't sufficient for him to back away from this insight and treat it as a mere impression of an insight. And if so, then he can use his insight as a premise both for the conclusion he draws and to support his belief that Lewis is mistaken (I know this sounds too easy, but so are most responses to the sceptic). You seem to claim above (independently of the Bayesian argument) that it's part of our epistemic practice that van Inwagen would be expected to back away from his insight. I doubt that this is so. Perhaps he would be expected to do so if he were a young graduate student of Lewis's, but he's not, and, as I pointed out, cleverness is not a good index of accurate insight when the dispute is between leading philosophers (even if it would be a relevant factor if someone who knows he's not so clever is suspicious of an argument given by someone he knows to be far more clever). Of course there can be evidence that one's insights are less reliable than others', but I don't think that mere disagreement with David Lewis counts as such evidence.


Echoing Hal, I know I would personally appreciate a layman's encapsulation of the "rational agents cannot honestly disagree" argument. I've tried to get through the pdf paper but was unable to grasp it! Since, Robin, you cite it extremely frequently, it might be valuable to attempt such a thing...


IMO the results on the impossibility of disagreement are probably the most marvelous, surprising and challenging in the study of human bias. Definitely worth a post, in fact well worth an entire book (which "someone" said they might write some day, eh?).
