I second the suggestion that the post could have just as easily been titled "Beware disagreeing with van Inwagen"--the positions of van Inwagen and Lewis are perfectly symmetrical (at least as judged by all externally evaluable criteria); that's exactly what makes the case interesting...

PvI doesn't admit that Lewis is a better philosopher; he just says Lewis is a truly formidable philosopher--by which he means, I think, that he doesn't think *he's* better than Lewis.

Philip, on reflection your position seems much less tenable than Peter van Inwagen's position, and "bald" isn't too strong a word. Peter admits that his easily communicated evidence is the same, but he justifies his disagreement by referring to his hard-to-communicate evidence, the sum of all his experience and subconscious intuition. You, in contrast, admit that *all* of your evidence, no matter how easy or hard to communicate, is exactly identical, but nevertheless maintain that you are right and your epistemic duplicates are wrong. This seems quite a bald and unsupportable claim.

Also, while it is possible to be too timid, surely overconfidence is on average by far the more common error, especially among philosophers.

Robin, For what it’s worth, here’s a brief summary of the dialectic. On my view, my knowledge that I am actual comes directly from the indexical analysis I give of the concept of actuality. That is not baldly stated or assumed. But then, in responding to Lewis’s argument, I have to say why my knowledge isn’t defeated by merely possible people who also claim to know that they are actual. Here I argue that actuality is what I call a perspectival concept, and that knowing that one is actual is a matter not only of having the right concepts (and evidence), but of having the right perspective. Finally, I note (what some would take as an objection) that this solution requires adopting a form of epistemic chauvinism. But I don’t take this to be an objection because, as I say in the quote you give, I am committed to epistemic chauvinism in any case with respect to a priori knowledge generally.

I did not say that the vast majority of philosophers agree with my view about possible worlds (they don’t), just that they agree with the principle about actuality I gave that is compatible with my view and not with Lewis’s.

I agree that there is a risk of being overly confident. But there is also a risk of being overly timid, of not trusting what one sees before one’s eyes, or one’s mind’s eye. We do not disagree substantially as to the various factors involved, just as to how they should be weighed.

Eliezer, The “light of reason”, of course, is a term of art and not to be taken literally. I agree that the fundamental assumptions I accept, the unprovable axioms of my theories, are represented in my brain, and that some computational process occurs when I contemplate them, understand them, and evaluate them as true. It is not my business to know what is going on at the level of computation – I can’t help you there. And, of course, I don’t think what is going on is that “they are judged under a higher criterion” – that way lies infinite regress. I think what is going on in the case of philosophical or metaphysical assumptions is not substantially different from what is going on in the case of mathematical assumptions, say, the axiom of mathematical induction of Peano arithmetic, or the axiom of choice of ZFC set theory. If you also hold in those cases that we are never justified in believing the axioms true, then we are far apart indeed.

Eliezer,

What advances has AI made into how computation results in subjective awareness of perception, awareness of thinking, etc.?

Certainly there is an assumption on many people's part that all these things are nothing more than substrate-neutral computation. But others disagree, such as Chalmers. I'm curious what light AI research has shed on this subject.

Bricker, my main problem is with "The light of reason, I simply conclude, shines on me and not on them."

My profession is Artificial Intelligence, and it is never far from my mind that cognition is a causal phenomenon. You say: "But at the end of the day, as all philosophers know, there will be fundamental assumptions that cannot be proven, and that other philosophers, equally “smart” and “rational”, do not accept." I don't accept this principle myself. But even if it were true, these "fundamental assumptions" are not outside the web of causality that wends through physics and biology and psychology. These "fundamental assumptions" must be represented somewhere in your brain.

If I try taking the statement about "the light of reason" literally, and drawing a little causal network of it, it might look like this:

[Light of reason] -> [Bricker] -> [Opinion about possible worlds.]

If "the light of reason" is capable of affecting your opinions (and certain more obviously physical variables, such as the movement of your fingers on the keyboard) then, as an AI researcher, I would like to know what this light of reason is, and how to shine it on an AI, too.

Cognition is a causal phenomenon. A difference of cognition implies some difference of computation. You and Lewis certainly have genetic differences, or perhaps differences of brain development; you are different human beings. You might even be able to attribute your different opinions about possible worlds to some brute difference of cognitive processing - though this I doubt very highly. My question is, in what sense, and by what criterion, you call this difference "The light of reason." Or if I change the viewpoint to that of "fundamental assumptions", then either (1) you have adopted your fundamental assumptions in the light of some higher criterion, reason, which makes these fundamental assumptions arguable and not really "fundamental"; or (2) these fundamental assumptions are not judged under any higher criterion, in which case you are not justified in referring to your fundamental assumptions as a "light of reason" that shines on you and not others.

Philip, perhaps I did not appreciate the full context of your "epistemic chauvinist" declaration. You accepted that the usual knowledge criteria did not give you knowledge of your "actual" status, and you then seemed to simply declare that you could believe anyway. That seemed to me a bald and unjustified stance.

Now you point out that the vast majority of philosophers agree with your conclusion, and I'll accept this as meeting my challenge, being a substantial consideration that weighs in your favor.

I'll continue, however, to argue that you should consider how many people, how smart and well read on the topic, think what, when deciding if you are justified in disagreeing with them. If you disagree then someone has made an error, and you must try to estimate who is more likely to have made an error.

The "evidence of your own thought" would have to be unusually strong to overrule such considerations. If you were reliable in only overruling such considerations when you had especially strong evidence, then others would reasonably change their minds when facing such unusual insistence on your part, and you would no longer disagree. But clearly humans are biased, so that usually when someone prefers the evidence of their own thought, it is due to overconfidence and not especially powerful private evidence. So the question is how sure can you be that you are not expressing the same usual overconfidence?

In this case you can reasonably say that it is more likely that one very smart person made an error than that the vast majority of philosophers have made an error.
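
As a rough illustration of that comparison, here is a minimal sketch; the error rates, the size of the majority, and the assumption that errors are independent are all invented for the arithmetic, not drawn from the exchange above.

```python
# Rough illustration only: compare "one very smart person erred" against
# "the large majority erred", assuming (unrealistically) independent errors.
# All numbers below are made up for the sake of the arithmetic.

p_err_expert = 0.15     # chance a very smart specialist errs on this question
p_err_typical = 0.25    # chance a typical philosopher errs on this question
n_majority = 50         # size of the agreeing majority (illustrative)

# Probability that the lone expert is the one who made the mistake
p_lone_error = p_err_expert

# Probability that every member of the majority independently made the mistake
p_mass_error = p_err_typical ** n_majority

print(f"P(one smart person erred)   ~ {p_lone_error:.2f}")
print(f"P(the whole majority erred) ~ {p_mass_error:.2e}")
# Under independence the majority-error hypothesis is vanishingly unlikely;
# the real question is how correlated the majority's errors are.
```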

I did not grant that "Lewis is smarter" (though I am happy to do so) or "that the usual arguments favor his side." Some familiar arguments favor his side, some favor mine, some go against both of our views. And I do think (and argue in the paper) that widely and strongly accepted principles straightforwardly support my view over his: for it is widely accepted that, if merely possible objects exist (or have any sort of being), then they differ in ontological status from actual objects. That principle is incompatible with Lewis's view. Perhaps you are not aware that only a tiny minority of philosophers accept Lewis's views on the existence and nature of possible worlds.

What I find strange about your comments is the weight you place on appeals to authority, and worse, appeals to "smartness" (whatever that is). Since philosophical authorities, even very smart ones, often disagree, appealing to authority in philosophy is demonstrably a bad policy. That could be a reason to accept my response (1) above, and be agnostic about (almost) all matters of fundamental metaphysics (or to be relativist, or to deny that such statements have truth values). But if justification is largely an internal affair (as I believe), then one can find oneself in a position of believing one is justified in one's philosophical views, even knowing other philosophers disagree. Their being "smart" no more requires you to withhold belief in this case than if a "smart" person told you not to believe what you plainly see before your eyes. In both cases, what they say has some weight; but it may be overshadowed by the evidence of your own senses, or, in the philosophical case, the evidence of your own thought.

Philip, sorry you felt caricatured, and thank you for engaging. You speak of "being unable to prove that belief" and of having "no 'knockdown' arguments against opposing views," but such situations are not all the same. In your situation you seemed to grant, as I said, that "Lewis is smarter and the usual arguments favor Lewis's side."

Surely you would be more justified in your disagreement if you thought you were smarter, the better philosopher, more familiar with this issue, with a very widely and strongly accepted principle straightforwardly supporting your position. So if instead these considerations favor Lewis, you must be less justified.

Unless you want to argue that you are always justified in disagreeing when no proof has been found either way, considerations like these must be relevant. If these considerations weigh against you, what considerations do you see that weigh in your favor?

I received an e-mail from Robin Hanson with subject line: “you are featured today on OvercomingBias.com.” Caricatured is more like it. I certainly don’t express my disagreement with Lewis over the nature of actuality as “just because, that’s why.” In the paper cited (on my website), I spend over 16,000 words defending my view, presenting and evaluating arguments and counterarguments. An “epistemic chauvinist”, as I use the term, does not abrogate her obligation to consider all the evidence, all the arguments. But at the end of the day, as all philosophers know, there will be fundamental assumptions that cannot be proven, and that other philosophers, equally “smart” and “rational”, do not accept. There are three responses: (1) withhold all belief in philosophical theses; (2) believe, but deny that one is justified in believing (a psychological impossibility for me); or (3) believe, and believe one is justified in believing, even though one has no knockdown argument against one’s opponent, even though one’s belief rests on assumptions that other philosophers reject. As an epistemic chauvinist, I opt for response (3). That, of course, is not the end of the story. One still needs an account of “justification” in these matters, and of what role (if any) intuition and insight can play. And, yes, one still needs an account of how so many other “smart” philosophers can be getting it wrong.

By the way, with respect to epistemic chauvinism, I am in full agreement with David Lewis. He often used the example of alternative logics. A philosopher who rejects the law of non-contradiction cannot be swayed by rational argument: it does no good to catch them in a contradiction! But Lewis nonetheless believed, and thought he was justified in believing, in classical logic. Examples could be endlessly multiplied. (Newcomb’s problem is another favorite of Lewis’s: he was a committed 2-boxer, but also convinced that no “argument” could convince a diehard 1-boxer to change their view.)
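
For readers unfamiliar with the setup, here is a minimal sketch using the standard illustrative payoffs ($1,000 in the transparent box; $1,000,000 in the opaque box only if one-boxing was predicted) and an assumed predictor accuracy; it shows how each camp can take the arithmetic to be on its side:

```python
# Minimal Newcomb's-problem sketch with the standard illustrative payoffs.
# The predictor puts $1,000,000 in the opaque box only if it predicts
# you will take one box; the transparent box always holds $1,000.

SMALL = 1_000
BIG = 1_000_000
p_accurate = 0.99  # assumed predictor accuracy, for illustration

# The "evidential" one-boxer conditions on the prediction tracking the choice:
ev_one_box = p_accurate * BIG + (1 - p_accurate) * 0
ev_two_box = p_accurate * SMALL + (1 - p_accurate) * (BIG + SMALL)

# The "causal" two-boxer holds the box contents fixed: whatever is already
# in the opaque box, taking both boxes adds $1,000.
dominance_gain = SMALL

print(f"One-boxer's expected value: ${ev_one_box:,.0f}")
print(f"Two-boxer's expected value: ${ev_two_box:,.0f}")
print(f"Two-boxer's dominance argument: +${dominance_gain:,} either way")
# Each side's calculation is internally valid given its own framing,
# which is why, as Lewis noted, no "argument" settles the matter.
```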

If claiming to have justified belief in one’s philosophical view, even though one has no “knockdown” arguments against opposing views, and no non-question-begging ways to support (much less prove) one’s fundamental assumptions, is properly called “bias”, so be it. But then distinguish. To hold a belief, and claim to be justified in that belief, and yet refuse to examine the evidence or arguments against the belief is bias of an objectionable sort. To claim to be justified in a belief after examining all the evidence and arguments, in spite of being unable to prove that belief to the satisfaction of other smart people who have also examined the same evidence and arguments, is not an objectionable form of bias – or not obviously so.

Eliezer, yes, even if they accepted that Lewis was more likely to be right than either of them, their job is far from done. And it might be best for them to explore the opinions that they were initially inclined toward, even if they think Lewis was more likely to be right. The question of what your best estimate should be at the moment is not at all the question of which ideas should be explored when by whom, and when anyone is "done." And you should always be uncomfortable when all you have is some intuitions you can't seem to articulate.

Robin, the problem here goes deeper than willful stubbornness, and does not originate with willful stubbornness, which is why it'd be harder to find an example of a similar confession in physics. (The closest thing I can think of is Einstein's insistence that God does not play dice, and when you think about Everett, Einstein was more right than wrong.)

And, you can't repair the deeper problem by trusting someone else's insights, or even their arguments.

Suppose one group of analytic philosophers feels that they ought to take only Box B in Newcomb's Problem, and another group feels that it can only be rational to take both boxes. Starting to argue over how many boxes you should *really* take is exactly the wrong reaction. It's obvious enough why human beings would do that - there's two obvious sides and two obvious groups and an obvious fight to be fought. But it's still the wrong approach. The problem is somewhere in the shape of human intuitions. Victory is when you can see how human intuitions create the problem, and then the apparent confusion goes away.

Suppose you professed that possible worlds existed, because David Lewis said so. What would you really know? What would you really understand? Would you feel less confused? Would the subject seem less mysterious? Mind you, I am not saying the answer is wrong - I am saying that, even if it happened to be true, you would not be finished. You could not declare victory and stop. You would not be done, because you would still feel confused.

Feeling that your own incommunicable insight is more trustworthy than someone else's is not a reason to declare victory. How can you declare victory over an intuition you don't understand? Even if you argue, and win the argument, you won't know any more than you did when you started out. Even if your intuition is *true*, you wouldn't have helped yourself by arguing. If there was no one to argue with, if everyone agreed with you, you would have just as much work left to do before the mists of confusion blew away within your own mind.

One problem with saying that Inwagen ought to adopt Lewis's statement is that if Lewis cannot explain to Inwagen how Inwagen's own intuitions work, and thereby cause Inwagen to stop seeing the problem as mysterious, then Lewis isn't finished with his job either. Unless Lewis knows exactly how Inwagen's mind is operating, and Inwagen is just too stubborn to listen, which would be a different problem. The point is, I don't necessarily trust that Lewis is completely done with his job, either, if Lewis can't explain to Inwagen what it is about the shape of human psychology that, acting on this problem, produces Inwagen's apparent "incommunicable insight".

An even larger problem with saying that Inwagen should adopt Lewis's viewpoint is that it makes it into a people-fight, an argument over whether "free will" is or isn't incompatible with "determinism". It makes you think that progress is winning the argument, rather than unweaving the question. It makes you think there are sides, rather than a confusion.

Dennett, in "Breaking the Spell", points out that while many religious assertions are very hard to believe, it is easy for people to believe that they *ought* to believe them. Dennett terms this "belief in belief", and suggests that much religious belief is actually religious profession. Suppose Inwagen were to profess that free will is compatible with determinism, because David Lewis said so. What progress has been made? I used to believe that light was made of waves, because physicists told me so. But since it turned out that I had no idea what they meant by "wave", my profession was completely useless, and it is questionable whether there was any sense in which my belief, or rather profession, could be described as "true".

Eliezer,

1) Philosophers are to be praised for explicitly considering these issues; others usually ignore them.

2) I don't know Lewis's intellectual history.

3) Deliberative reasoning need not be more reliable, but the point is: what grounds do you have for thinking your hidden reasoning is better, if your visible stuff is worse?

4) They believed they were talking about something real; that seems enough to me.

5) These others believed Lewis was smarter; that seems enough.

There are all sorts of uncomfortable questions I could ask here. Here are a few of them:

1) Why didn't you give an example from physics, or economics, of someone saying in a published paper that they believed X, when others believed Y, solely on the basis of an incommunicable insight? Why pick on the poor field of analytic philosophy?

2) How did David Lewis arrive at the positions that he argues? Did he start out not knowing, and then deploy the same kind of reasoning that he is now using in his arguments, and thereby arrive at his opinion? Or did Lewis start with an incommunicable insight that told him what his initial position was, and then figure out lots of clever arguments for it?

3) The distinction between rationality and irrationality should not be confused with the distinction between System 2 and System 1 (deliberative and perceptual judgments). What evidence do you have that the kind of reasoning used in philosophical papers is more reliable than a feeling of incommunicable insight?

4) More reliable at doing what, exactly? How do you tell whether a philosophical position is right or wrong? Suppose David Lewis is right about everything. What harm will come to Inwagen for his obstinacy? Wrong experimental predictions? Inwagen builds a toaster oven, and it doesn't work? What bad thing happens to him, exactly?

5) How does anyone know that David Lewis is highly intelligent? IQ test?

Now, I don't mean this as quite as severe a criticism of the subject matter of analytic philosophy as it may sound. I have some idea of what awful real-world penalties might befall me, in my profession, if I came to a poor understanding of Newcomb's Problem. But I do have a pretty severe criticism of a common methodology in analytic philosophy, which is to deploy arguments and visualizations and scenarios with the intent of pumping the other's intuition and thereby communicating the feeling of incommunicable insight. This is like deliberately sneezing on someone when you have a cold. If you have an incommunicable insight, it means something is wrong - there's something you don't understand about your own psychology, how your own mind is analyzing the problem. The business of trying to pump intuitions just endlessly replicates the problem, which almost always turns out to lie in the shape of the intuitions themselves.

Inwagen's mistake lies in trusting his own intuitions when he doesn't know where they come from - when, even by Inwagen's own lights, he has not resolved the mystery, and cannot possibly be finished with the problem.

Robin, what do you mean by "enough pursuit"? I think it's pretty obvious that we'd get less philosophy than we have today if people had unbiased beliefs, so do you think we have an overproduction of philosophy today?

Regarding seeing beliefs as values, I'm not saying that it explains all disagreements, just those arising from differing priors. A disagreement between two friends over the chance of rain tomorrow is almost certainly not caused by differing priors. But in any case, it's not central to my argument that people admit that their priors represent values. We can interpret their priors as representing values regardless of what they claim.

Simon, the issue here isn't whether Lewis is right, but whether the other two people were justified in disagreeing with him.

Wei, yes, the people who believe the most are most likely to pursue an idea. This does not mean we would not get enough pursuit if people had unbiased beliefs. Regarding seeing beliefs as values, ask two friends for the chance of rain tomorrow. If they give different numbers, ask them if this is because the one who gave a higher number values rain more.

Paul, there are formal approaches to this topic that avoid infinite regress. (See: http://hanson.gmu.edu/decie.... And everyone accepts that not everything known can be communicated. The point is that these philosophers admitted that Lewis was a better philosopher with a better handle on explicit arguments, and that the explicit arguments identified favored Lewis. So what possible grounds could they have for thinking that their hard to communicate knowledge was better than his?
