Beware of Disagreeing with Lewis

David Lewis, my guess for the most important philosopher of the last half century, seems to reduce philosophers who disagree with him to saying "just because, that’s why."  Consider Peter van Inwagen and Phillip Bricker.

Peter van Inwagen’s 1992 paper "It Is Wrong, Everywhere, Always, and for Anyone, to Believe Anything upon Insufficient Evidence" was the first in the modern series of philosophy papers on the rationality of disagreement. He described it as "a polemic against what I perceive as a widespread double standard in writings about the relation of religious belief to evidence and argument":


How can I believe (as I do) that free will is incompatible with determinism or that unrealized possibilities are not physical objects or that human beings are not four-dimensional things extended in time as well as in space, when David Lewis–a philosopher of truly formidable intelligence and insight and ability–rejects these things I believe and is already aware of and understands perfectly every argument that I could produce in their defense? … I suppose my best guess is that I enjoy some sort of philosophical insight (I mean in relation to these three particular theses) that, for all his merits, is somehow denied to Lewis. And this would have to be an insight that is incommunicable–at least I don’t know how to communicate it–for I have done all I can to communicate it to Lewis, and he has understood perfectly everything I have said, and he has not come to share my conclusions. But maybe my best guess is wrong. I’m confident about only one thing in this area … it must be possible for one to be justified in accepting a philosophical thesis when there are philosophers who, by all objective and external criteria, are at least equally well qualified to pronounce on that thesis and who reject it. … if you grant that evidence may include incommunicable insight, can you be sure, have you any particular reason to suppose, that it is false that there are religious believers who have "insight" that lends the same sort of support to their religious beliefs that the incommunicable insight that justifies your disagreement with Kripke or Quine or Davidson or Dummett or Putnam lends to your beliefs?

Thus the one thing van Inwagen is most sure of is that he must somehow be justified in disagreeing with someone much smarter who has understood all the same communicable evidence. 

In the December 2006 issue of Philosophical Perspectives, Phillip Bricker takes issue with Lewis’s famous conclusion that actuality is relative, that each possible world is real on its own terms.   Bricker says that Lewis’s strongest challenge to the idea that only our world is really actual is this:

There are concrete merely possible people who are epistemically situated exactly as we are: there is no evidence that can distinguish our predicament from theirs.  But then we can’t rule out the possibility that we are the merely possible people inhabiting a merely possible world.

Bricker accepts this lack of distinguishing evidence, and grants that such a lack usually undermines claims to knowledge. On this topic, however, Bricker is an "epistemic chauvinist," who

holds that her beliefs may constitute knowledge even though another subject, actual or possible, with the same evidence, the same concepts, and the same powers of reasoning holds beliefs contrary to hers.  … I claim to know that possibilia exist in spite of the preponderance of benighted philosophers who disagree.  It’s not that I think there are any non-question-begging arguments that will force them, by the light of reason, to see the error of their ways.   The light of reason, I simply conclude, shines on me and not on them.

Bricker also thinks he must be justified in disagreeing with Lewis, even if Lewis is smarter and the usual arguments favor Lewis’s side.  Bricker’s and van Inwagen’s unreasoned disagreements with Lewis seem to me to be clear cases of bias, i.e., people who should admit their case is weak, and who should at least be much less certain.   Beware of disagreeing with David Lewis!

(FYI, arguments similar to Lewis’s also question how we could know that we have been conscious or that any part of us is now conscious.)

  • http://pdf23ds.net pdf23ds

    This is pretty much the form the debate on the existence of qualia and mental dualism/monism usually takes too, in my experience. Maybe among real philosophers this particular subject is a bit less dogmatic, though.

  • Tweedledee

    Three possible responses to the bias charge spring to mind:
    1) Philosophy is not a science, and great philosophers are often wrong even about the subjects on which they are expert, for reasons including ego, status, career, etc. This being the case, one must discount the weight that one gives to a philosophical argument according to the probability of ulterior motives influencing the person making the argument. Thus, even if I think that someone has a better argument, but that they are sufficiently non-motivated/incentivized to seek the truth, I may still put a higher p on my admittedly inferior argument being true.

    2) It may be that Van Inwagen and Bricker have non-propositional evidence that is not common knowledge and that leads them to discount Lewis’ arguments. For instance, perhaps knowing that an argument is right is like riding a bicycle insomuch as it’s not something one understands discursively but is simply something that one feels. Van Inwagen and Bricker may feel strongly that Lewis’ argument is wrong in a way that they can’t convey to Lewis.

    3) Low priors on Lewis’ conclusion lead them to discount his argument to a sufficient degree even though they still estimate that it is superior to any argument that they have.

    On the most direct reading of the passages you’ve cited, it seems to me that Van Inwagen and Bricker are more or less saying 2.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Twee, I don’t see why philosophers would be more likely to have ulterior motives. And if you thought that Lewis might not be aware of his ulterior motives, you’d need a good reason to think that they were less of an issue for you. Similarly, if non-communicable knowledge were influencing your beliefs, it might also be influencing Lewis’s beliefs, and you’d need a good reason to think that such knowledge of yours was better than his. If your priors differed, you’d need a good reason to think that your priors were more related to truth than his priors are.

  • http://www.weidai.com Wei Dai

    What do probabilities mean if we agree with Lewis that all possible worlds exist and that actuality is relative? When we say the probability of x is y given prior P and evidence E, we must really mean that given a measure M, the measure of worlds where a version of me has observed E and x is true divided by the measure of worlds where a version of me has observed E is y.

    The measure M over all possible worlds takes the place of the prior P, and it no longer seems to have much to do with “truth”. Given that in decision theory, probabilities are eventually used to multiply with utilities to form expected utilities, it makes more sense to me to interpret this measure M as describing how much one cares about classes of possible worlds. The more one cares about a class of possible worlds, the greater its measure, and the more contribution it makes to the computation of expected utilities.
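    A minimal numerical sketch of this measure-as-caring reading (the worlds, measures, and utilities below are invented purely for illustration):

```python
# Each possible world carries a measure (how much one cares about it),
# a record of whether a version of me observed evidence E there, whether
# proposition x holds there, and a utility for some fixed act.
worlds = [
    {"measure": 0.6, "observed_E": True,  "x_true": True,  "utility": 10},
    {"measure": 0.3, "observed_E": True,  "x_true": False, "utility": 2},
    {"measure": 0.1, "observed_E": False, "x_true": True,  "utility": 5},
]

# P(x | E): measure of worlds where a version of me observed E and x holds,
# divided by the measure of worlds where a version of me observed E.
m_E = sum(w["measure"] for w in worlds if w["observed_E"])
m_xE = sum(w["measure"] for w in worlds if w["observed_E"] and w["x_true"])
p_x_given_E = m_xE / m_E  # 0.6 / 0.9, about 0.667

# Expected utility: each world contributes in proportion to its measure,
# so "caring more" about a class of worlds raises that class's influence.
total = sum(w["measure"] for w in worlds)
eu = sum(w["measure"] * w["utility"] for w in worlds) / total
```

    Whether the weights are called a "prior" or a "caring measure," the arithmetic of conditioning and expectation is the same; only the interpretation differs.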

    Therefore I suggest that disagreements over facts that stem from different priors are really disagreements over values, and that’s why they are so persistent.

    But the explanation for these particular disagreements amongst the philosophers is simpler: philosophy as a profession is especially attractive to those who are confident in their own intuitions and reasoning abilities, and philosophy as an institution encourages disagreements and arguments. I think the bias is good in this case because otherwise we would probably see a much reduced philosophical output.

  • simon

    It is logically possible that I will observe my pen levitating in the air at a height of 10, 20, or 30 (etc) centimetres ten seconds from now. For each height there is a logically possible world that is identical to this world up to that point, and then the pen begins levitating at that particular height. If all logically possible worlds are real, why should I believe that I am going to observe the particular world where the pen just sits there?

    *observes pen sitting there*

    Is this not strong experimental evidence against Lewis’s claim that actuality is relative?

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Simon, the evidence you refer to is evidence that you are in a particular possible world; the question is whether others are in other possible worlds.

    Wei, overconfidence may induce effort, which might be good, but it would be even better if people would explore various ideas without actually believing differently in them. Is that so much to ask?

    And yes, one might abstractly interpret differing priors as differing values, but the people involved usually deny that this is what is going on.

  • http://profile.typekey.com/simon112/ simon

    There are many more logically possible worlds that share the past history of this world, but diverge from the laws of physics in the future, than there are possible futures for this world that continue to follow laws of physics. This is still true when we consider only possibilities where the deviation from past physics does not threaten the continued existence of our minds, as in my example of my pen levitating. The evidence we have so far does not distinguish between these possibilities. Therefore, anyone who believes that all possible worlds are real should predict that the laws of physics will be grossly violated in the future. The failure of this to happen is evidence against this point of view.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Simon, you are not required to have probabilities that are uniform across all logically possible worlds. And you are allowed to use your previous experience to infer which world you are in. Any inference you can do assuming your world is the only actual one you can also do if the other possible worlds are just as real. Take the thread title seriously here – Lewis has considered these issues.

  • http://profile.typekey.com/simon112/ simon

    It seems to me that if you don’t have a uniform distribution (over worlds or observers or observers identical to yourself or something), you are either in effect assuming that the worlds to which you assign a lower probability have a lower probability of being real (or are somehow “less real”) or you are assuming some kind of metareality* in which your particular probability distribution applies, and that other logically possible metarealities with different probability distributions are less real, either one of which would be inconsistent with Lewis’s views. And you can’t use your previous experience to distinguish between worlds with identical pasts.

    You say Lewis has considered these issues; if I knew what these counterobjections were I could attempt to evaluate them, but not knowing what they are I can only guess how valid I would find them given the information I have at hand:

    1. He was a well known philosopher
    2. You endorse him
    3. Other philosophers use bad arguments against him

    2 helps but 3 is almost irrelevant since it is behaviour I expect from philosophers with fairly high probability; I don’t find 1 very impressive.

    *I don’t know what word I should be using

  • http://www.weidai.com Wei Dai

    “Wei, overconfidence may induce effort, which might be good, but it would be even better if people would explore various ideas without actually believing differently in them. Is that so much to ask?”

    Suppose I think some idea is interesting but only somewhat likely to be true. Another person believes it is very likely to be true. All else being equal, who is more likely to spend a lot of time and energy exploring its consequences, writing a paper about it, getting that published, etc.? I think you’re never going to change the fact that the proponent of an idea is usually someone who believes in it too much.

    “And yes, one might abstractly interpret differing priors as differing values, but the people involved usually deny that this is what is going on.”

    Really? How can they deny that differing priors represent differing values, when very few people have even heard of the idea? Is there a literature on this topic that I’ve missed?

  • Paul Gowder

    This is totally just an off-the-cuff-idea, it may be stupid.

    That being said: to what extent does the assertion that this constitutes “bias” depend on an agent-relative perspective that, once abandoned, leads to a meaningless infinite regress?

    In other words, the structure of the argument is as follows: B (Lewis) takes a position. A (van Inwagen) takes a position different from B. A explains A’s position to B. B understands it completely (by A’s lights), but fails to accept it. A thus has reason to doubt A’s position.

    But A might want to consider the fact that the same argument could be applied to B — at least if B is participating in the discussion in good faith and offering his reasons for disagreeing with A, which A understands. B has reason to doubt B’s position because of A’s disagreement just because A has reason to doubt A’s position because of B’s disagreement.

    But if this is true, then A has less reason to doubt A’s position than previously thought, for A knows that B is subject to the same bias which you’ve diagnosed A with, and is accordingly entitled to discount B’s truth-finding process. Ditto, the other way around, with B. So, uh, what does this kind of argument prove? Why not entitle the post “Beware of disagreeing with van Inwagen?”

    (Assume away the suggestion that B is more intelligent than A. I might also want to relax van Inwagen’s own assumption that Lewis understood his argument — the statement that someone *fully understood* your argument and disagrees seems to be much more problematic than the statement that someone heard your reasons and had the capacity to understand and disagreed. The former suggests that they understand all of it, i.e. all the reasons for believing it.)

    Here’s another thing we might offer to respond to this sort of argument. Not all kinds of arguments are complete, or can be made complete. Let’s go all the way back to Kant. Kant pointed out that in order to bring individual instances under concepts, the faculty of judgment is needed, and judgment can’t be bound by rules. (It’s an infinite regress argument: there can’t be a rule for applying rules, because there’d have to be a rule for applying that rule, etc. etc.) The incommunicable difference between van Inwagen and Lewis might be something like that. For example, they might just assign different weights to various considerations that don’t have a rule specifying how they’re to be weighted.

    • John 4

      I second the suggestion that the post could have just as easily been titled “Beware disagreeing with van Inwagen”–the positions of van Inwagen and Lewis are perfectly symmetrical (at least as judged by all externally evaluable criteria), that’s exactly what makes the case interesting…

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Simon, the issue here isn’t whether Lewis is right, but whether the other two people were justified in disagreeing with him.

    Wei, yes, the people who believe the most are most likely to pursue an idea. This does not mean we would not get enough pursuit if people had unbiased beliefs. Regarding seeing beliefs as values, ask two friends for the chance of rain tomorrow. If they give different numbers, ask them if this is because the one who gave a higher number values rain more.

    Paul, there are formal approaches to this topic that avoid infinite regress. (See: http://hanson.gmu.edu/decieve.pdf). And everyone accepts that not everything known can be communicated. The point is that these philosophers admitted that Lewis was a better philosopher with a better handle on explicit arguments, and that the explicit arguments identified favored Lewis. So what possible grounds could they have for thinking that their hard to communicate knowledge was better than his?

    • John 4

      PvI doesn’t admit that Lewis is a better philosopher, he just says he’s a truly formidable philosopher–by which he means, I think, that he doesn’t think that *he’s* better than Lewis.

  • http://www.weidai.com Wei Dai

    Robin, what do you mean by “enough pursuit”? I think it’s pretty obvious that we’d get less philosophy than we have today if people had unbiased beliefs, so do you think we have an overproduction of philosophy today?

    Regarding seeing beliefs as values, I’m not saying that it explains all disagreements, just those arising from differing priors. A disagreement amongst two friends over the chance of rain tomorrow is almost certainly not caused by differing priors. But in any case, it’s not central to my argument that people admit that their priors represent values. We can interpret their priors as representing values regardless of what they claim.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    There’s all sorts of uncomfortable questions I could ask here. Here’s a few of them:

    1) Why didn’t you give an example from physics, or economics, of someone saying in a published paper that they believed X, when others believed Y, solely on the basis of an incommunicable insight? Why pick on the poor field of analytic philosophy?

    2) How did David Lewis arrive at the positions that he argues? Did he start out not knowing, and then deploy the same kind of reasoning that he is now using in his arguments, and thereby arrive at his opinion? Or did Lewis start with an incommunicable insight that told him what his initial position was, and then figure out lots of clever arguments for it?

    3) The distinction between rationality and irrationality should not be confused with the distinction between System 2 and System 1 (deliberative and perceptual judgments). What evidence do you have that the kind of reasoning used in philosophical papers is more reliable than a feeling of incommunicable insight?

    4) More reliable at doing what, exactly? How do you tell whether a philosophical position is right or wrong? Suppose David Lewis is right about everything. What harm will come to Inwagen for his obstinacy? Wrong experimental predictions? Inwagen builds a toaster oven, and it doesn’t work? What bad thing happens to him, exactly?

    5) How does anyone know that David Lewis is highly intelligent? IQ test?

    Now, I don’t mean this as quite as severe a criticism of the subject matter of analytic philosophy as it may sound. I have some idea of what awful real-world penalties might befall me, in my profession, if I came to a poor understanding of Newcomb’s Problem. But I do have a pretty severe criticism of a common methodology in analytic philosophy, which is to deploy arguments and visualizations and scenarios with the intent of pumping the other’s intuition and thereby communicating the feeling of incommunicable insight. This is like deliberately sneezing on someone when you have a cold. If you have an incommunicable insight, it means something is wrong – there’s something you don’t understand about your own psychology, how your own mind is analyzing the problem. The business of trying to pump intuitions just endlessly replicates the problem, which almost always turns out to lie in the shape of the intuitions themselves.

    Inwagen’s mistake lies in trusting his own intuitions when he doesn’t know where they come from – when, even by Inwagen’s own lights, he has not resolved the mystery, and cannot possibly be finished with the problem.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Eliezer, 1) Philosophers are to be praised for explicitly considering these issues; others usually ignore them. 2) I don’t know Lewis’s intellectual history. 3) Deliberative reasoning need not be more reliable, but the point is what grounds do you have for thinking your hidden reasoning is better, if your visible stuff is worse. 4) They believed they were talking about something real, that seems enough to me. 5) These others believed Lewis was smarter, that seems enough.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Robin, the problem here goes deeper than willful stubbornness, and does not originate with willful stubbornness, which is why it’d be harder to find an example of a similar confession in physics. (The closest thing I can think of is Einstein’s insistence that God does not play dice, and when you think about Everett, Einstein was more right than wrong.)

    And, you can’t repair the deeper problem by trusting someone else’s insights, or even their arguments.

    Suppose one group of analytic philosophers feels that they ought to take only Box B in Newcomb’s Problem, and another group feels that it can only be rational to take both boxes. Starting to argue over how many boxes you should *really* take is exactly the wrong reaction. It’s obvious enough why human beings would do that – there’s two obvious sides and two obvious groups and an obvious fight to be fought. But it’s still the wrong approach. The problem is somewhere in the shape of human intuitions. Victory is when you can see how human intuitions create the problem, and then the apparent confusion goes away.

    Suppose you professed that possible worlds existed, because David Lewis said so. What would you really know? What would you really understand? Would you feel less confused? Would the subject seem less mysterious? Mind you, I am not saying the answer is wrong – I am saying that, even if it happened to be true, you would not be finished. You could not declare victory and stop. You would not be done, because you would still feel confused.

    Feeling that your own incommunicable insight is more trustworthy than someone else’s is not a reason to declare victory. How can you declare victory over an intuition you don’t understand? Even if you argue, and win the argument, you won’t know any more than you did when you started out. Even if your intuition is *true*, you wouldn’t have helped yourself by arguing. If there was no one to argue with, if everyone agreed with you, you would have just as much work left to do before the mists of confusion blew away within your own mind.

    One problem with saying that Inwagen ought to adopt Lewis’s statement is that if Lewis cannot explain to Inwagen how Inwagen’s own intuitions work, and thereby cause Inwagen to stop seeing the problem as mysterious, then Lewis isn’t finished with his job either. Unless Lewis knows exactly how Inwagen’s mind is operating, and Inwagen is just too stubborn to listen, which would be a different problem. The point is, I don’t necessarily trust that Lewis is completely done with his job, either, if Lewis can’t explain to Inwagen what it is about the shape of human psychology that, acting on this problem, produces Inwagen’s apparent “incommunicable insight”.

    An even larger problem with saying that Inwagen should adopt Lewis’s viewpoint is that it makes it into a people-fight, an argument over whether “free will” is or isn’t incompatible with “determinism”. It makes you think that progress is winning the argument, rather than unweaving the question. It makes you think there are sides, rather than a confusion.

    Dennett, in “Breaking the Spell”, points out that while many religious assertions are very hard to believe, it is easy for people to believe that they *ought* to believe them. Dennett terms this “belief in belief”, and suggests that much religious belief is actually religious profession. Suppose Inwagen were to profess that free will is compatible with determinism, because David Lewis said so. What progress has been made? I used to believe that light was made of waves, because physicists told me so. But since it turned out that I had no idea what they meant by “wave”, my profession was completely useless, and it is questionable whether there was any sense in which my belief, or rather profession, could be described as “true”.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Eliezer, yes, even if they accepted that Lewis was more likely to be right than either of them, their job is far from done. And it might be best for them to explore the opinions that they were initially inclined toward, even if they think Lewis was more likely to be right. The question of what your best estimate should be at the moment is not at all the question of which ideas should be explored when by whom, and when anyone is “done.” And you should always be uncomfortable when all you have is some intuitions you can’t seem to articulate.

  • http://profile.typekey.com/PhillipBricker/ Phillip Bricker

    I received an e-mail from Robin Hanson with subject line: “you are featured today on OvercomingBias.com.” Caricatured, is more like it. I certainly don’t express my disagreement with Lewis over the nature of actuality as “just because, that’s why.” In the paper cited (on my website), I spend over 16,000 words defending my view, presenting and evaluating arguments and counterarguments. An “epistemic chauvinist”, as I use the term, does not abrogate her obligation to consider all the evidence, all the arguments. But at the end of the day, as all philosophers know, there will be fundamental assumptions that cannot be proven, and that other philosophers, equally “smart” and “rational”, do not accept. There are three responses: (1) withhold all belief in philosophical theses; (2) believe, but deny that one is justified in believing (a psychological impossibility for me); or (3) believe, and believe one is justified in believing, even though one has no knockdown argument against one’s opponent, even though one’s belief rests on assumptions that other philosophers reject. As an epistemic chauvinist, I opt for response (3). That, of course, is not the end of the story. One still needs an account of “justification” in these matters, and of what role (if any) intuition and insight can play. And, yes, one still needs an account of how so many other “smart” philosophers can be getting it wrong.

    By the way, with respect to epistemic chauvinism, I am in full agreement with David Lewis. He often used the example of alternative logics. A philosopher who rejects the law of non-contradiction cannot be swayed by rational argument: it does no good to catch them in a contradiction! But Lewis nonetheless believed, and thought he was justified in believing, in classical logic. Examples could be endlessly multiplied. (Newcomb’s problem is another favorite of Lewis’s: he was a committed 2-boxer, but also convinced that no “argument” could convince a diehard 1-boxer to change their view.)

    If claiming to have justified belief in one’s philosophical view, even though one has no “knockdown” arguments against opposing views, and no non-question-begging ways to support (much less prove) one’s fundamental assumptions, is properly called “bias”, so be it. But then distinguish. To hold a belief, and claim to be justified in that belief, and yet refuse to examine the evidence or arguments against the belief is bias of an objectionable sort. To claim to be justified in a belief after examining all the evidence and arguments, in spite of being unable to prove that belief to the satisfaction of other smart people who have also examined the same evidence and arguments, is not an objectionable form of bias – or not obviously so.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Phillip, sorry you felt caricatured, and thank you for engaging. You speak of “being unable to prove that belief” and of having “no `knockdown’ arguments against opposing views,” but such situations are not all the same. In your situation you seemed to grant, as I said, that “Lewis is smarter and the usual arguments favor Lewis’s side.”

    Surely you would be more justified in your disagreement if you thought you were smarter, the better philosopher, more familiar with this issue, with a very widely and strongly accepted principle straightforwardly supporting your position. So if instead these considerations favor Lewis, you must be less justified.

    Unless you want to argue that you are always justified in disagreeing when no proof has been found either way, considerations like these must be relevant. If these considerations weigh against you, what considerations do you see that weigh in your favor?

  • http://profile.typekey.com/PhillipBricker/ Phillip Bricker

    I did not grant that “Lewis is smarter” (though I am happy to do so) or “that the usual arguments favor his side.” Some familiar arguments favor his side, some favor mine, some go against both of our views. And I do think (and argue in the paper) that widely and strongly accepted principles straightforwardly support my view over his: for it is widely accepted that, if merely possible objects exist (or have any sort of being), then they differ in ontological status from actual objects. That principle is incompatible with Lewis’s view. Perhaps you are not aware that only a tiny minority of philosophers accept Lewis’s views on the existence and nature of possible worlds.

    What I find strange about your comments is the weight you place on appeals to authority, and worse, appeals to “smartness” (whatever that is). Since philosophical authorities, even very smart ones, often disagree, appealing to authority in philosophy is demonstrably a bad policy. That could be a reason to accept my response (1) above, and be agnostic about (almost) all matters of fundamental metaphysics (or to be relativist, or to deny that such statements have truth values). But if justification is largely an internal affair (as I believe), then one can find oneself in a position of believing one is justified in one’s philosophical views, even knowing other philosophers disagree. Their being “smart” no more requires you to withhold belief in this case than if a “smart” person told you not to believe what you plainly see before your eyes. In both cases, what they say has some weight; but it may be overshadowed by the evidence of your own senses, or, in the philosophical case, the evidence of your own thought.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Phillip, perhaps I did not appreciate the full context of your “epistemic chauvinist” declaration. You accepted that the usual knowledge criteria did not give you knowledge of your “actual” status, and you then seemed to simply declare that you could believe anyway. That seemed to me a bald and unjustified stance.

    Now you point out that the vast majority of philosophers agree with your conclusion, and I’ll accept this as meeting my challenge, being a substantial consideration that weighs in your favor.

    I’ll continue, however, to argue that you should consider how many people, how smart and well read on the topic, think what, when deciding if you are justified in disagreeing with them. If you disagree then someone has made an error, and you must try to estimate who is more likely to have made an error.

    The “evidence of your own thought” would have to be unusually strong to overrule such considerations. If you were reliable in only overruling such considerations when you had especially strong evidence, then others would reasonably change their minds when facing such unusual insistence on your part, and you would no longer disagree. But clearly humans are biased, so that usually when someone prefers the evidence of their own thought, it is due to overconfidence and not especially powerful private evidence. So the question is how sure can you be that you are not expressing the same usual overconfidence?

    In this case you can reasonably say that it is more likely that one very smart person made an error than that the vast majority of philosophers have made an error.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Bricker, my main problem is with “The light of reason, I simply conclude, shines on me and not on them.”

    My profession is Artificial Intelligence, and it is never far from my mind that cognition is a causal phenomenon. You say: “But at the end of the day, as all philosophers know, there will be fundamental assumptions that cannot be proven, and that other philosophers, equally “smart” and “rational”, do not accept.” I don’t accept this principle myself. But even if it were true, these “fundamental assumptions” are not outside the web of causality that wends through physics and biology and psychology. These “fundamental assumptions” must be represented somewhere in your brain.

    If I try taking the statement about “the light of reason” literally, and drawing a little causal network of it, it might look like this:

    [Light of reason] -> [Bricker] -> [Opinion about possible worlds]

    If “the light of reason” is capable of affecting your opinions (and certain more obviously physical variables, such as the movement of your fingers on the keyboard) then, as an AI researcher, I would like to know what this light of reason is, and how to shine it on an AI, too.

    Cognition is a causal phenomenon. A difference of cognition implies some difference of computation. You and Lewis certainly have genetic differences, or perhaps differences of brain development; you are different human beings. You might even be able to attribute your different opinions about possible worlds to some brute difference of cognitive processing – though this I doubt very highly. My question is, in what sense, and by what criterion, you call this difference “The light of reason.” Or if I change the viewpoint to that of “fundamental assumptions”, then either (1) you have adopted your fundamental assumptions in the light of some higher criterion, reason, which makes these fundamental assumptions arguable and not really “fundamental”; or (2) these fundamental assumptions are not judged under any higher criterion, in which case you are not justified in referring to your fundamental assumptions as a “light of reason” that shines on you and not others.

  • http://amethodnotaposition.blogspot.com Matthew

    Eliezer,

    What advances has AI made into how computation results in subjective awareness of perception, awareness of thinking, etc.?

    Certainly there is an assumption on many people’s part that all these things are nothing more than substrate-neutral computation. But others disagree, such as Chalmers. I’m curious what light AI research has shed on this subject.

  • http://profile.typekey.com/PhillipBricker/ Phillip Bricker

    Robin, For what it’s worth, here’s a brief summary of the dialectic. On my view, my knowledge that I am actual comes directly from the indexical analysis I give of the concept of actuality. That is not baldly stated or assumed. But then, in responding to Lewis’s argument, I have to say why my knowledge isn’t defeated by merely possible people who also claim to know that they are actual. Here I argue that actuality is what I call a perspectival concept, and that knowing that one is actual is a matter not only of having the right concepts (and evidence), but of having the right perspective. Finally, I note (what some would take as an objection) that this solution requires adopting a form of epistemic chauvinism. But I don’t take this to be an objection because, as I say in the quote you give, I am committed to epistemic chauvinism in any case with respect to a priori knowledge generally.

    I did not say that the vast majority of philosophers agree with my view about possible worlds (they don’t), just that they agree with the principle about actuality I gave that is compatible with my view and not with Lewis’s.

    I agree that there is a risk of being overly confident. But there is also a risk of being overly timid, of not trusting what one sees before one’s eyes, or one’s mind’s eye. We do not disagree substantially as to the various factors involved, just as to how they should be weighed.

    Eliezer, The “light of reason”, of course, is a term of art and not to be taken literally. I agree that the fundamental assumptions I accept, the unprovable axioms of my theories, are represented in my brain, and that some computational process occurs when I contemplate them, understand them, and evaluate them as true. It is not my business to know what is going on at the level of computation – I can’t help you there. And, of course, I don’t think what is going on is that “they are judged under a higher criterion” – that way lies infinite regress. I think what is going on in the case of philosophical or metaphysical assumptions is not substantially different from what is going on in the case of mathematical assumptions, say, the axiom of mathematical induction of Peano arithmetic, or the axiom of choice of ZF set theory. If you also hold in those cases that we are never justified in believing the axioms true, then we are far apart indeed.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Phillip, on reflection your position seems much less tenable than Peter van Inwagen’s position, and “bald” isn’t too strong a word. Peter admits that his easily communicated evidence is the same, but he justifies his disagreement by referring to his hard-to-communicate evidence, the sum of all his experience and subconscious intuition. You, in contrast, admit that *all* of your evidence, no matter how easy or hard to communicate, is exactly identical, but nevertheless you are right and your epistemic duplicates are wrong. This seems quite a bald and unsupportable claim.

    Also, while it is indeed possible to be too timid, surely on average overconfidence is by far the more common error, especially among philosophers.
