Reasonable Disagreement

In his recent post Robin suggests that Van Inwagen is biased in his philosophical beliefs about free will, possible worlds and the nature of persons, on the grounds that to disagree with as clever a philosopher as Lewis (rather than suspend judgement, for example) cannot be reasonable. In the paper referenced, Van Inwagen concedes that he is not arguing that any particular philosophical positions are justified, just asserting that he believes some are. Van Inwagen’s main point is in fact that the use made of Clifford’s dictum (in brief, it is always wrong to believe on insufficient evidence) is biased, since it is applied to religious belief but not to other beliefs. Nevertheless, I think we could construct on Van Inwagen’s behalf an argument for the reasonableness of his disagreement with Lewis.

  1. Philosophers have no consensus on many important philosophical questions.
  2. Their disagreement cannot be adequately explained on the basis of communicable beliefs (even allowing for the general underdetermination of answers to philosophical questions by the considerations available).
  3. Therefore we must allow of there being incommunicable beliefs (which, when true, are incommunicable insights).
  4. There is no reason to think that philosophers are irrational.
  5. Both Lewis’s and my philosophical positions are justified with respect to their evidential bases.
  6. We have both examined everything we know to be relevant to the question at issue.
  7. Among the evidential base of justified philosophical belief are states that are either incommunicable insights or states we are unable to distinguish from such insights but whose content is erroneous.
  8. What can be communicated between us has not led either of us to realise that something we took to be an insight is an error.
  9. Therefore we can reasonably disagree.

He might also have further defended this by arguing that unless we are prepared to accept that none of our beliefs are justified (i.e. to be a certain kind of sceptic), a similar story has to be told for all of our beliefs: that their evidential base may not be entirely communicable and the justificational relations may not be entirely transparent to us.

  • Guy Kahane

    Let me add just two remarks:

    (1) ‘Incommunicable insight’ can be understood in two ways: an insight I can’t communicate to others as a matter of principle, or an insight I am simply unable, now, to communicate to others.

    (2) I think Nick has hit on a large part of the explanation for the apparent disagreement between Robin and many philosophers on the question of disagreement. Philosophers think about epistemology under the shadow of scepticism. Most of them consequently reach the conclusion that the standards for knowledge and justification used by the sceptic must be excessive. I suspect that the standard that Robin is assuming may be excessive in exactly this way. (This, by the way, is an undeveloped hunch, not an incommunicable insight.)

  • Guy is right that the relevant distinction here is hard vs. easy to communicate; presumably we rarely know if something is impossible vs. very hard to communicate.

Accepting that Van Inwagen and Lewis both have hard-to-communicate evidence and/or analysis, and that they have studied that evidence as best they can, it does not follow that they can reasonably disagree. They might each reasonably hold their beliefs if they did not become aware of the beliefs of the other, but once they do become aware, the fact of the other’s different opinion is a huge new piece of evidence which seriously questions their beliefs. They *must* ask themselves which of them is more likely to have made an error.

    Models of Bayesians (with common priors) in situations like this find that if agents consider this issue reasonably, they will no longer disagree. This is the starting point for my argument that disagreement is problematic.
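    A toy sketch of the mechanism behind such models (all numbers hypothetical: a common prior of 1/2 and a made-up private-signal accuracy of 4/5; this illustrates the general idea, not any particular published model):

```python
from fractions import Fraction

PRIOR = Fraction(1, 2)  # common prior that the true state is 1
ACC = Fraction(4, 5)    # hypothetical chance a private signal matches the state

def posterior(signals):
    """Posterior P(state = 1) after conditioning on a list of 0/1 signals."""
    p_state1 = PRIOR
    p_state0 = 1 - PRIOR
    for s in signals:
        p_state1 *= ACC if s == 1 else 1 - ACC
        p_state0 *= 1 - ACC if s == 1 else ACC
    return p_state1 / (p_state1 + p_state0)

# Privately, the agents disagree: one saw signal 1, the other signal 0.
p1, p2 = posterior([1]), posterior([0])   # 4/5 versus 1/5
# Announcing posteriors reveals the signals; conditioning on both,
# the agents end up with the same belief and the disagreement vanishes.
common = posterior([1, 0])                # back to 1/2
```

    With richer signal structures the exchange can take several rounds, but the standard results show it still ends in agreement.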

    Guy, how am I at risk of endorsing skepticism?

  • Guy: Under either way of taking ‘incommunicable insight’, the argument has traction. I suspect that Van Inwagen thinks some beliefs can be incommunicable in principle, and many people appear to agree with him when they say things like ‘you can’t know if you weren’t there/if you’re not an x/if you haven’t experienced y’.

    Robin: You say ‘the fact of the other’s different opinion is a huge new piece of evidence which seriously questions their beliefs’. I think that is exaggerated. Of course, disagreement should give us pause. However, it’s not as if Lewis and Van Inwagen hadn’t communicated about these issues. Their opinions are held in the light of their discussions and readings of each other’s papers. More importantly, it’s not clear what kind of evidence the mere fact of disagreement is. How is Van Inwagen supposed to reason about, for example, endurance versus perdurance of objects, on the basis of knowing that Lewis disagrees? The fact that Lewis disagrees is not itself a fact that bears on the issue. The only way he can take cognisance of Lewis’s disagreement is by attending to what Lewis says about why he disagrees, which is to say, to attend to the reasons that Lewis offers for being a perdurantist rather than an endurantist (e.g. the problem of temporary intrinsics). But of course, Van Inwagen has certainly done that, and in turn given his replies.

  • To summarize the Modesty Argument as it seems to be applied in this instance: “Inwagen should give up and agree with (Lewis)/(the majority), because even under Inwagen’s own lights, (Lewis)/(the majority) are more likely than Inwagen to have incommunicable justifications that are correct and therefore incommunicable insights.”

    As is often the case, again it seems to me that the Modesty Argument doesn’t provide good advice. To say that you must e.g. side either with the compatibilists or incompatibilists on the question of free will, is to impose a false dichotomy. Incommunicable insights indicate that there is something wrong at the roots – if Lewis can’t figure out where Inwagen’s incommunicable insights are coming from, then Lewis, not just Inwagen, has a problem. (If Lewis figures it out but Inwagen refuses to listen, that’s a whole different problem.)

    One of the problems with the Modesty Argument is that it tends to reduce arguments to a conflict between opposed sides. Even if the advice to join the opposing side sounds humble and self-abnegating, it still may not be the correct thing to do – the conflict itself may indicate a sickness of the field, and the correct and necessary course may be to search for third alternatives, which is rarely a Modest thing to do. (It can be a Modest thing to do if the modal opinion is that the field is in need of third alternatives.)

  • Guy Kahane

    Robin, the parallel with scepticism may go like this. Suppose a thinker sees that X and on the basis of X, and his other evidence, concludes that Y. Someone might object to his concluding that Y on the grounds that, ‘from the inside’, there’s no way of distinguishing seeing that X and merely having the impression that X. So the thinker can’t rely on X as a premise to support Y. He first needs to have a reason for thinking that his impression that X isn’t in error. If you accept this form of objection, it becomes extremely hard, at best, to resist scepticism. So, many conclude, we don’t need to rule out the mere possibility of error, and we shouldn’t concede to the sceptic that our starting point isn’t X but only the impression that X. It seems to me that the situation in the type of disagreement we are discussing is similar. Van Inwagen needn’t concede that, until he can SHOW that it’s Lewis’s incommunicable insight that is in error rather than his, he can at most claim to have the impression of such an insight. (How much weight can be put on the fact that Lewis actually did disagree with Van Inwagen, whereas the Cartesian demon is only an hypothesis?)

    A second point. In philosophy, perhaps unlike in other areas of inquiry, greatness has little to do with verisimilitude. Some of the great philosophers are great precisely because they made great, wonderful, outrageous errors. As for cleverness, in philosophy cleverness is what goes into those parts of the inquiry that ARE fully communicable—those bits of the arguments of Van Inwagen and Lewis that they CAN fully share with each other. So Van Inwagen’s admission that Lewis is more clever may not make much of an epistemic difference here.

  • Guy Kahane

    Eliezer, although there’s not much to be said for ending a dispute with each side’s appeal to an incommunicable insight, the views that such interminable philosophical disagreements indicate that there must be a third option ignored by both parties, or worse, that the philosophical question at stake is only a nonsensical pseudo-question, are themselves old views that are subject to interminable philosophical disagreement… (Which isn’t to say they’re wrong, only that philosophers have given various arguments for them, arguments that other competent philosophers considered seriously yet rejected.)

  • Guy, yes philosophers can make great contributions even if their claims are in error, and yes cleverness may be pursued for reasons other than overcoming error, which weakens the correlation between cleverness and low error. But if you are evaluating the accuracy of your belief, then it still seems that you should estimate that clever people are more likely to be accurate, and to have insightful hard-to-communicate intuitions.

  • Nicholas, you doubt my description of disagreement as “huge” evidence, since you don’t know how one “is supposed to reason” about it, and “the only way [Van Inwagen] can take cognisance of Lewis’s disagreement is by attending to what Lewis says about why he disagrees.” This last quoted claim is clearly false; I do not have to know why you drew your conclusion in order for me to change my opinion based on the fact of your conclusion. All I need is a belief that your conclusions tend to be well-founded. As I commented before, exact models of Bayesians in such situations find that they can take such evidence into account, and that it bears hugely, so much in fact as to eliminate disagreement. Perhaps I should make a post explaining such results.

  • Eliezer, let’s say you initially assigned P(A) = 70%, P(B) = 25%, and P(other) = 5%, and then you learn that someone you respect assigns P(B) = 70%, P(A) = 25%, and P(other) = 5%. This might well induce both of you to raise your estimate of P(other). But we can still discuss P(A | A or B) and P(B | A or B), and regarding these the modesty argument is not obviously threatened by considering P(other).
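    The arithmetic in this example is easy to check directly (a minimal sketch using the numbers above):

```python
# The two probability assignments from the example above.
mine = {"A": 0.70, "B": 0.25, "other": 0.05}
yours = {"A": 0.25, "B": 0.70, "other": 0.05}

def cond_A_given_A_or_B(p):
    """P(A | A or B): renormalize over the A/B alternatives alone."""
    return p["A"] / (p["A"] + p["B"])

my_cond = cond_A_given_A_or_B(mine)     # 0.70 / 0.95, about 0.737
your_cond = cond_A_given_A_or_B(yours)  # 0.25 / 0.95, about 0.263
# Raising P(other) shrinks numerator and denominator alike, so the
# conditional disagreement over A versus B is left untouched.
```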

  • Guy, you draw the parallel that skeptics ask you to refrain from believing what you see until you can show no demon fooled you, while Hanson asks Van Inwagen to show that Lewis is in error before he can disagree with Lewis. The difference is that we typically accept that people should be assumed equally capable until evidence is presented which suggests one is better, while we accept no similar symmetry between what we see and demon scenarios. To be clear, I am asking for weak evidence, not logical proof, i.e., features of Lewis that might plausibly but perhaps weakly correlate with higher or lower error rates in his hard-to-communicate intuitions.

  • IMO the results on the impossibility of disagreement are probably the most marvelous, surprising and challenging in the study of human bias. Definitely worth a post, in fact well worth an entire book (which “someone” said they might write some day, eh?).

  • Echoing Hal, I know I would personally appreciate a layman’s encapsulation of the “rational agents cannot honestly disagree” argument. I’ve tried to get through the pdf paper but I was unable to grasp it! Since, Robin, you cite it extremely frequently, it might be valuable to attempt such a thing…

  • Guy Kahane

    Robin, let’s distinguish the question of what someone should believe given that he knows Lewis and Van Inwagen disagree from the question of what VAN INWAGEN should believe given his knowledge that Lewis disagrees with him.

    Since van Inwagen HAS a particular insight, my point was merely that the mere fact of Lewis’s disagreement isn’t sufficient for him to back away from this insight and treat it as a mere impression of an insight. And if so, then he can use his insight as a premise both to support the conclusion he draws and to support his belief that Lewis is mistaken (I know this sounds too easy, but so are most responses to the sceptic). You seem to claim above (independently of the Bayesian argument) that it’s part of our epistemic practice that van Inwagen would be expected to back away from his insight. I doubt that this is so. Perhaps he would be expected to do so if he were a young graduate student of Lewis’s, but he’s not, and as I pointed out, cleverness is not a good index of accurate insight when the dispute is between leading philosophers (even if it would be a relevant factor if someone who knows he’s not so clever is suspicious of an argument given by someone he knows to be far more clever). Of course there can be evidence that one’s insights are less reliable than others’, but I don’t think that mere disagreement with David Lewis counts as such evidence.

  • Are we talking here about incorrigible evidence, which is so certain that no doubt is possible? If that were the case, would it be logically possible for two people to have different and inconsistent incorrigible beliefs? I would say, no, that is logically impossible.

    Hence, given the disagreement, the incommunicable beliefs and evidence must be admitted to be uncertain. Each person should estimate how much uncertainty he has about this incommunicable evidence, and they should communicate these estimates to each other. Since they have good faith beliefs in each other’s rationality and honesty, they can view these estimates as unbiased, and should therefore each adopt the evidence which has the higher degree of estimated reliability. In this way they will reconcile their differences.

  • Guy, I’m getting confused about our terminology. Let us say an “impression” is a hard-to-communicate influence on belief, and let us say a “hunch” is an impression that comes with another impression, which says that the first impression is based on some sort of not-fully-conscious but epistemically justified basis of evidence or analysis. Following Nick’s usage, let us only call a hunch an “insight” if it influences our beliefs in the right direction. Otherwise it is in error, a “misleading hunch.” Finally, let us call it a “powerful hunch” if it comes with a hunch that its basis is unusually strong, compared to typical hunches.

    Van Inwagen starts out aware that he has a hunch, which he reasonably presumes is insight. But then he becomes aware that Lewis disagrees, and so Van Inwagen must conclude that Lewis has a contrary hunch. At this point Van Inwagen could be justified in disagreeing if he had a powerful hunch, which Lewis is unlikely to have since such things are unusual. But if so he could just tell Lewis this fact, and then Lewis should change his mind, and they would no longer disagree.

    But if Van Inwagen has only a hunch of ordinary strength, I don’t see what basis he has for thinking his hunch is more likely to represent insight than Lewis’s. He is not justified in assuming that it is insight just because it is his hunch. Whatever his estimate of the relative quality of their hunches, he (and Lewis) should then choose a middle belief that reflects that relative quality.
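    One way to make “a middle belief that reflects that relative quality” concrete is a linear opinion pool (a hypothetical sketch; the credences and weights here are invented for illustration, not anything either philosopher endorsed):

```python
def pooled_belief(p1, p2, w1=1.0, w2=1.0):
    """Linear opinion pool: a weighted average of two credences, where
    the weights stand in for each party's estimated hunch quality."""
    return (w1 * p1 + w2 * p2) / (w1 + w2)

# Hunches of equal estimated quality meet in the middle...
equal = pooled_belief(0.9, 0.3)           # 0.6
# ...while a hunch judged three times as reliable pulls the pool toward it.
skewed = pooled_belief(0.9, 0.3, w1=3.0)  # 0.75
```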

  • Guy Kahane

    Let’s be clear first about the kind of disagreement we’re disagreeing about. The claim isn’t that reasonable disagreement is possible at the Peircian ideal ‘end of inquiry’. It’s that reasonable disagreement is possible at a point far from the end of inquiry, even if, at that point in time, it may not be possible to further advance it.

    The dichotomy between incorrigible beliefs and beliefs that need justification is exactly one that, as I remarked, many epistemologists reject. My belief that there is an external world, or that I have two hands, is certainly fallible, even corrigible, but I don’t need to rule out the various ways in which it may turn out false to be justified in holding it, indeed even in KNOWING these things to be true.

    Robin, you’re interpreting the situation between van Inwagen and Lewis in precisely the way I suggested that many epistemologists would reject. van Inwagen needn’t concede that he has a hunch or impression, weak or strong. He BELIEVES that p; he is not merely strongly inclined to believe that p. And if so, he can use p as a premise. This belief, again, isn’t in any way infallible. The question though is whether the mere fact that he discovers that Lewis disagrees with him is reason for him to suspend judgement in it and re-endorse it only after he can show to his own satisfaction that he’s right and Lewis wrong. Many epistemologists would deny that this gives him any such reason.

  • Guy, I fear I am reaching the limits of my ability to converse seriously with philosophers while trying to “pick up” their language via random readings. At least that interpretation seems preferable to what you seem to be saying, namely that we are each justified in giving ourselves the benefit of the doubt that our hunches are insight, in a way that we are not justified regarding others. In moral philosophy giving yourself more benefit of the doubt about the morality of your actions is considered a self-favoring bias. Why not here also?

    You and Nick S. both seem to say that philosophers consider disagreement to be only weak evidence, whereas I say that in detailed Bayesian models it is very strong evidence. Could it be that philosophers are not sufficiently aware of this result? Or would that still not be convincing to them?

  • Robin: The formal results are certainly of considerable interest, but Bayesianism is a mathematical model of belief, and there naturally arise questions about the extent of its applicability. By its nature Bayesianism dispenses with a great many distinctions between kinds of beliefs that have philosophical significance. The claims you are making about the weight of evidence given by disagreement are more appealing when applied to beliefs to which a notion of causal truth tracking makes sense. But when it comes to philosophical doctrines and other a priori truths, it makes less sense. Furthermore, someone could argue that since reasonable disagreement is a fact of our lives, the very strong result about disagreement is a reductio of Bayesianism as a model for belief.

  • “Furthermore, someone could argue that since reasonable disagreement is a fact of our lives, the very strong result about disagreement is a reductio of Bayesianism as a model for belief.”

    Why isn’t this pure naturalistic fallacy? Plenty of gross reasoning errors are facts of modern-day human life. If you mean that it’s a reductio of Bayesianism as a *descriptive* model of human belief, no one sane uses it that way.

  • Nick S, you seem willing to grant that ordinary disagreements may be unreasonable, but want to carve out an exception for disagreements about “philosophical doctrines and other a priori truths.” I admit I’ve always had trouble understanding some of these distinctions.

    For most topics there is a sensible distinction between people with different basic evidence and people with different analysis of that evidence. The simplest Bayesian formulation does not allow for different analysis, but it can be straightforwardly generalized to allow for different analysis, and then the standard disagreement results remain.

    If the concept of a priori truths is fundamentally different from different analysis of the same evidence, then the question is whether we can make sense of “impossible possible worlds” in which these a priori questions have different answers. If we can, then the usual Bayesian results will hold.

  • Eliezer: the argument would be
    1. Normative Bayesianism implies that people cannot reasonably disagree.
    2. People do reasonably disagree.
    3. What is actual is possible.
    4. Therefore people can reasonably disagree. (2, 3)
    5. Therefore normative Bayesianism is false. (1, 4 modus tollens)

    Robin: of course I grant that *any* kind of disagreement (not just ordinary ones) *might* be unreasonable. That’s not the issue. The point I was making was that in the case of beliefs for which we can give a causal truth-tracking account, we can at least give some kind of account of why someone disagreeing with me could be *evidence* for me, and so count among the reasons I possess that bear on the belief in question, whereas that is not the case with respect to a priori propositions in dispute between epistemic peers. I reiterate my earlier remark: the only way Van Inwagen can take cognisance of Lewis’s disagreement is by attending to what Lewis says about why he disagrees (etc.). The relevant sense of ‘can’ is rational epistemic possibility. You said this was clearly false, but it’s not. I did not say that the only way anyone could ever take cognisance was by attending to what is said; I said it of *Van Inwagen* in relation to *Lewis* with respect to the *metaphysical issue of persistence of identity through time*. The crucial point here is that there comes a point at which other people’s reports of their disagreement may have no rational significance. They have to say something which bears on the issue in virtue of the relevance of the content of what they say to the issue. That there are other occasions on which I might, as you put it, ‘change my opinion based on the fact of your conclusion’ because ‘All I need is a belief that your conclusions tend to be well-founded’ is irrelevant.

  • Heh. Nicholas, whilst I can see your inexorable chain of logic, one very much wishes to know what possible desideratum – nay, not even the word of God spoken from out of the clouds – could possibly support “2. People do reasonably disagree” when no less a desideratum than NORMATIVE BAYESIANISM says otherwise.

  • Nicholas, just to amplify on that, what I’m challenging you to do is justify the step from “philosophical disagreement is a fact of our lives” to “reasonable philosophical disagreement is a fact of our lives”. That’s the key step, and you can’t just say you observed it – one wishes to know what justification for the term “reasonable” is strong enough to override normative Bayes. The fact that it *feels* very reasonable to you is not strong enough to override Bayes, because conjunction fallacies, and availability biases, and nine-tenths of the known fallacies in the lexicon, *feel* reasonable.

  • Nick S, I presume we agree that there is a true answer to the question of the “persistence of identity through time,” as otherwise there is no point in disagreeing about that topic. Even if you thought that this claim was either true in all possible worlds, or false in all possible worlds, in the usual sense of a self-consistent “possible world,” we can invoke the concept of an “impossible possible world” which need not be self-consistent.

    Using this concept of an impossible possible world, we can formally express the idea that as you proceed with error-prone analysis, you may mistakenly believe a claim that is not consistent, but that as you conduct more analysis you will make fewer such mistakes. In such a framework the fact that someone else proceeding similarly disagrees with you is a powerful relevant clue about the errors in your analysis, just as ordinary disagreement is a powerful clue about ordinary inference.

  • I see this is in effect being further addressed in new posts, so I’ll just answer the last two points and then leave it.
    Eliezer: The answer to your challenge is in the original post, where I offered precisely such an argument. Secondly, I think you are mistaken about where the burden of proof lies. No one is disputing that sometimes there are unreasonable disagreements, which is all that your examples go to show. But you are simply assuming that Bayesianism is true. My point is that it is no less reasonable, and perhaps more reasonable, to start from the premiss that people do reasonably disagree (indeed, some would argue that we are morally required to accept that premiss), and if Bayesianism conflicts with that, so much the worse for Bayesianism.
    Robin: It is not that I am uninterested in or unsympathetic to the formal results, but I am bringing into view ways in which it might be argued that the formal model seems to give the wrong answer. Guy brought out the point about idealisation at the end of enquiry versus our situation. There is a lot to be discussed about whether and when idealisation is a clarification rather than an obscuration of philosophical issues. I have been pressing on a different point, namely the requirement that reasons for a belief should have content that is relevant to the truth of the content of the belief. I have drawn your attention to a specific argument about endurance versus perdurance which turns on the problem of temporary intrinsics. Attending to that argument shows why Lewis thinks temporary intrinsics means perdurance is true whilst Van Inwagen does not, and the reasons that Lewis adduces are other metaphysical doctrines, in particular, doctrines about what it is to be an intrinsic property. The belief in their disagreement has no content that bears. It is quite irrelevant. If you add it as a premiss to either of their arguments it sits as an idle cog. It can neither justify nor defeat any of the reasons they adduce in this dispute. So I have given an example of the way in which disagreement, at least prima facie, has no rational significance, and your reply is to say, well yes, but in my formal model with impossible possible worlds it does. Fine, say I, so much the worse for your model!

  • Nick, it seems the issue we most need to consider in further exploring this topic is the appropriateness of Bayesian-like analysis. But as it is a framework intended to account for a wide range of issues in inference, we should judge it overall in terms of all of the intuitions it may or may not conflict with, relative to other possible frameworks of analysis. Since we expect some of our intuitions to be in error, finding a few conflicts with intuition should not discourage us from embracing Bayesian analysis. I invite you to take the first shot by posting sometime on what you see as the most serious problems with the Bayesian approach.