17 Comments

Nicholas, I hope I will eventually figure out what you mean by "reason based epistemology." You seem to imply that it, in contrast to stark Bayesianism, is not relativist.

It’s not simply a terminological issue precisely because of the criticism, adverted to in Nick B’s (c), that Bayesians frequently want to make of reason based epistemology (and I agree that it is a significant point to make). But you can’t have it both ways. Either you abjure reason based epistemology altogether, put up with whatever account of rational belief you can manage on that basis, and can advance the criticism as a problem your epistemology avoids; or you admit elements of reason based epistemology and accept that the criticism is a problem that you too face. I think the choice is stark here: you can either be a Normative Bayesian or you can be a reason based epistemologist (some of whom, like me, think that insights from formal methods are important but require carefully thought out application). I might be wrong about this, but I think you two think there’s a position in between. Currently, Normative Bayesianism is a relativist position because there is as yet no solution to the problem of the priors, so if you want to be a non-relativist Normative Bayesian that problem has to be solved Bayesianly. Of course, Aumann’s and Robin’s papers are interesting steps in that direction, so I’m not saying that the non-relativist Normative Bayesian programme is a degenerating one.

Perhaps not surprisingly, all three of Nick B.'s characteristic features of Bayesianism describe me well. And perhaps I should speak of disagreement as unreasonable, rather than irrational.

I think there are different definitions or understandings of "Bayesianism" floating around in the literature. Only in some of these (which we might call pure subjective Bayesianism) are there no rationality constraints on belief other than coherence and updating rules. (Colin Howson is an even more hardcore Bayesian, refusing even to condone diachronic belief constraints, wishing to claim the remainder as pure logic.) But the term can also be used for the view (which I hold) that there are additional constraints on rational belief beyond coherence and Bayesian updating. I sometimes use the term "rational" for beliefs that satisfy at least the formal constraints, and the term "reasonable" for beliefs that also satisfy the additional material constraints. On this usage, a belief could avoid being irrational while still being unreasonable.

On this weaker form of Bayesianism it might be less clear what is distinctive about it and how it is different from "reason-based epistemology". I don't have a ready answer for that, and I have to say the question does not interest me hugely since it seems mainly terminological. But maybe we could point to some characteristic features: (a) belief in the fruitfulness of the formal Bayesian framework; (b) belief that prior probabilities are essential and cannot be ignored, e.g. in philosophy of science; (c) belief that the whole enterprise of formulating what it means to "accept" a proposition, and criteria for when we should accept, reject, or suspend judgement about a proposition, creates a lot of problems for itself (the so-called lottery paradox, etc.), many of which can be naturally overcome if we instead focus on credences and assign degrees of belief (probabilities) to propositions. At least, these three things tend to be believed by people who call themselves Bayesians but seemingly not by many others.
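
To make the point in (c) concrete, here is a toy rendering of the lottery paradox (a minimal sketch; the 1000-ticket lottery and the 0.99 acceptance threshold are my own illustrative choices). Acceptance at a threshold licenses each conjunct but not their conjunction, whereas the credences themselves never conflict:

```python
n, threshold = 1000, 0.99
p_i_loses = 1 - 1 / n                 # credence that any given ticket loses: 0.999
accept_each = p_i_loses > threshold   # True: the threshold licenses accepting
                                      # "ticket i loses" for every single i
p_all_lose = 0.0                      # but exactly one ticket wins, so the
                                      # conjunction "every ticket loses" is
                                      # certainly false
print(accept_each, p_i_loses, p_all_lose)
```

Working directly with the credences (0.999 for each conjunct, 0 for the conjunction) generates no inconsistency at all.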

Nick B: Regarding your other remarks: I agree with the thought that lies behind them and I would see that thought as offering reasons why we might accept that truths about ideal Bayesian believers should be taken into account when thinking about the rationality of belief for persons. But probably we should have a separate post about the relation of Bayesian believers to persons.

Conditioning on the information about the Priors of other evolved agents is, of course, purely Bayesian. However, in addition to the idealising assumption (that ideal Bayesian believers get it right) which Normative Bayesianism is entitled to (at least in advancing itself), Robin’s (2) rests on a further assumption that it is *information* about *evolved* agents in a *material world* with *causal processes*, and that amounts to assuming realism, naturalism and various pieces of methodology from philosophy of science. Of course, I’m not opposed to those assumptions, but their justification steps outside Bayesian rationality.

Robin: I'm not taking a position, just clarifying what distinctive positions there are to take. I don't think Normative Bayesianism is true, but I think it's interesting to press it as far as we can to find out just what it can and cannot model of rationality. Hence my interest in your results. However, pressing it as far as we can requires not muddling it up with other positions. But as I said, the problems we are interested in are probably not exactly the same.

An example of a foundational metaphysical question involving a kind of origin dispute: idealism vs. realism. Idealism says that there is no material world; there are only minds and mental states. In that case the origin of our Priors could not be evolution, since there are no bodies to evolve.

James: to answer another aspect of your question: also distinguish the ideal Bayesian believers of Aumann's and Robin's models from actual people. Aumann gives the story of Bayesian believers repeatedly exchanging their credences in a proposition, each updating in the light of knowledge of the other's credence, and then once again exchanging their updated credences, and so on. The reiterated utterances and updatings lead to their credences converging. I don't know if it would require infinitely many steps to converge (I suspect the proof is a limit argument, in which case it would), but they are ideal Bayesian believers and this would be an unobjectionable idealisation.
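
In finite models of this kind of exchange (in the style of Geanakoplos and Polemarchakis's "We Can't Disagree Forever") convergence does in fact take only finitely many announcements. A minimal sketch, assuming a finite state space and a common prior; the four-state example at the bottom is a hypothetical of my own, standing in for the general case:

```python
from fractions import Fraction

def cell_of(partition, state):
    """The cell of the partition containing the given state."""
    return next(cell for cell in partition if state in cell)

def posterior(prior, event, info):
    """P(event | info) under the common prior."""
    total = sum(prior[w] for w in info)
    return sum(prior[w] for w in info if w in event) / total

def refine(partition, announcement):
    """Split each cell by the values the announcement function takes on it:
    hearing the announcement tells the listener which level set holds."""
    refined = []
    for cell in partition:
        groups = {}
        for w in cell:
            groups.setdefault(announcement[w], set()).add(w)
        refined.extend(groups.values())
    return refined

def exchange(prior, event, part_a, part_b, true_state):
    parts, history = [part_a, part_b], []
    changed = True
    while changed:                      # stop once no announcement is news
        changed = False
        for i in (0, 1):
            # what agent i would announce in each possible state
            announcement = {w: posterior(prior, event, cell_of(parts[i], w))
                            for w in prior}
            history.append(announcement[true_state])
            new_part = refine(parts[1 - i], announcement)
            changed = changed or len(new_part) != len(parts[1 - i])
            parts[1 - i] = new_part
    return history

states = [1, 2, 3, 4]
prior = {w: Fraction(1, 4) for w in states}   # uniform common prior
event = {1, 4}
part_a = [{1, 2}, {3, 4}]                      # A's private information
part_b = [{1, 2, 3}, {4}]                      # B's private information
print([float(q) for q in exchange(prior, event, part_a, part_b, true_state=1)])
# [0.5, 0.333..., 0.5, 0.5, 0.5, 0.5]: agreement after finitely many rounds
```

Since the state space is finite and each announcement can only split information cells, never merge them, the protocol must terminate, and when it does the final posteriors are common knowledge and hence, by Aumann's theorem, equal.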

James, one can have a normative standard that one tries to live up to, even if one doesn't have an exact algorithm that guarantees zero deviation from the standard. Anytime you can identify a systematic deviation between your belief and the normative standard, you try to change your beliefs to reduce that deviation.
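
As a toy illustration of that corrective move (the track record and the 50% shrinkage weight below are arbitrary choices of mine, not a recommended algorithm): if your 90%-confident predictions have come true only 60% of the time, that is a measurable systematic deviation, and you can shrink future credences toward the observed frequency:

```python
from collections import defaultdict

def calibration_table(predictions):
    """predictions: (stated probability, 0/1 outcome) pairs from your record."""
    buckets = defaultdict(list)
    for p, outcome in predictions:
        buckets[round(p, 1)].append(outcome)
    return {p: sum(os) / len(os) for p, os in buckets.items()}

def recalibrate(p, table, weight=0.5):
    """Move a stated credence partway toward the frequency observed for
    similar past statements; the weight is an arbitrary choice here."""
    observed = table.get(round(p, 1), p)
    return (1 - weight) * p + weight * observed

history = [(0.9, 1), (0.9, 0), (0.9, 1), (0.9, 0), (0.9, 1)]  # 60% hit rate
print(recalibrate(0.9, calibration_table(history)))  # ~0.75, nudged toward 0.6
```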

This seems as good a place as any to continue with what I was trying to pursue...

"Unless Bayesians think the causal process that produced their prior was special, they will have common priors"

How will they construct this common prior? Without a constructive method, surely they will more likely accept that they have different priors and get on with life, even if differing priors means that they cannot be "rational" by your definition.

Nicholas, you say "Bayesianism" is "the entirety of the rationality of belief is modeled by probability theory." I can endorse the position that most rationality constraints are *expressible* in terms of probability theory, but not the position you seem to be taking, that all rationality constraints *reduce* to probability theory. I look forward to hearing you explain how "foundational metaphysical questions are precisely a kind of origin dispute."

Nick B: For your first suggestion, I don’t know. Do the Bayesians in Aumann’s original paper have common knowledge of their ideal Bayesian believerness, or just common knowledge of the posterior proposition, with each knowing or commonly knowing that they had common priors? I should have said Bayesianly rational Priors in MBA, for the reasons mentioned in my reply to Robin. I think what I say there explains why, whilst I agree with what you say in your second point, I don’t think such rational principles count as part of Normative Bayesianism. Of course, the picture I gave of an ideal Bayesian believer is something of an approximation, just because the third clause is frequently significantly modified by Bayesians (e.g. Jeffrey conditionalisation as opposed to pure Bayes’ theorem), but the loose description, that the entirety of the rationality of belief is to be modelled in some way or another by probability theory, captures what I take to be distinctive about Bayesian epistemology. Otherwise, the problem of old evidence for hypothesis confirmation, for example, needn’t be a problem. I agree with your final paragraph and a number of points you make along the way. Might discuss them more tomorrow.

Robin: To prove that there was a unique Bayesianly rational prior would prove that Bayesianism rules out reasonable disagreement so I can’t see why it’s a distraction. In the earlier discussion of reasonable disagreement you and several others seemed to claim that *Bayesianism* rules out reasonable disagreement. Of course, you can use the word ‘Bayesianism’ in any way you please. However, if Bayesianism is supposed to be a distinctive position in epistemology, a position distinct from reason based epistemology, it can’t just mean using a lot of probability theory and retreating to other notions of rationality when the theory runs out. It must be that the entirety of the rationality of belief is modelled by probability theory (and that is why I included the second conjunct in my definition of Normative Bayesianism). So if we are interested in contrasting Normative Bayesianism with reason based epistemology we have to be careful about which things are achieved by Bayesian standards, where ‘Bayesianism’ means something like the definition I gave in this post, and which achievements depend also on independent notions of rationality.

For the sake of discussion, I’ve framed the normative issue in terms of an ideal believer. If, in addition to Bayesian requirements, an ideal believer is defined in terms of notions of rationality that are independent of Bayesian requirements then they are not a pure Bayesian believer and conclusions based on such a believer are not the outcome of Normative Bayesianism, but the outcome of Normative Bayesianism plus reason based epistemology. The problem with this hybrid is that it is a position within, not distinct from, reason based epistemology, a position in which Bayesianism is a methodology within reason based epistemology. Consequently, if it is Normative Bayesianism that rules out reasonable disagreement, where Normative Bayesianism is supposed to be a distinctive position in epistemology, one of WBA or SBA is going to have to be true.

Now turning to your framing of the problem: As far as I can make out (and subject to correction, since I admit I haven’t re-read it since I read it last summer), the ideal believers of ‘Uncommon Priors Require Origin Disputes’ are not pure Bayesian believers, since you appeal to other notions of rationality when you want to say that we cannot reasonably believe that our prior was made special. Consequently, the conclusion of your paper, which you summarise in your post as ‘unless Bayesians think the causal process that produced their prior was special, they will have common priors’, is not justified by the content of the paper, if by Bayesianism you intend a normative position distinct from reason based epistemology.

You might say that you don’t care about whether it is Normative Bayesianism that rules out disagreement, you just want to argue against the reasonableness of disagreement. That’s fine, of course, but whether it is Normative Bayesianism alone or whether other constraints are needed to get you there will have an impact further down the line, in arguments over the reasonableness of, for example, the dispute between Van Inwagen and Lewis. Foundational metaphysical questions are precisely a kind of origin dispute, and if you are now admitting that origin disputes are not settled by Bayesianism but by reason based epistemology, you have weaker grounds on which to say Van Inwagen cannot reasonably disagree unless he thinks himself superior to Lewis.

Pdf, merely knowing more about yourself can't be enough; you must at least know better-than-average things about yourself. And if the other person were reasonable and also knew you were better than average, they would not disagree.

Hal, the difference in real life is that we know a lot more about A (ourselves) than B. So, in the situation where they're both other people, and you know a lot about A and nothing about B (except that they hold some other position), does that justify you in thinking that A is more likely right? Or do you also have to know more about B?

Here's how I see the crux of the Modesty Argument. Suppose you know that A and B disagree, but have no other information than that. Based only on that information, who is more likely to be right?

Now, add the fact that you are A, but again no other information. Now who is more likely to be right?

If the mere fact that you happen to be A makes you think A is now more likely to be right, that is a violation of indexical independence and seems pretty egotistical and unreasonable. If knowing which is you leaves you in the dark about who is right, that is modesty.
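
A toy simulation of the indexical-independence point (the setup is mine, purely illustrative): when who is right and which disputant you are get settled independently, conditioning on "I am A" moves nothing:

```python
import random

random.seed(0)
n = 100_000
# each trial: (who is right, which disputant you happen to be), independent
trials = [(random.choice("AB"), random.choice("AB")) for _ in range(n)]
p_a_right = sum(1 for right, _ in trials if right == "A") / n
given_you_are_a = [right for right, you in trials if you == "A"]
p_a_right_given_you_are_a = given_you_are_a.count("A") / len(given_you_are_a)
print(round(p_a_right, 3), round(p_a_right_given_you_are_a, 3))  # both ~ 0.5
```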

Nick B, yes, when I speak of having a common "prior" I more precisely have in mind a posterior conditioned on knowing who has what belief tendencies and the basics of the causal origins of such tendencies. And yes, anyone who considers a set of beliefs to be equally valid faces the question of why an average over that set isn't preferable to a random element of that set.

NickS, some further ideas on defining the rationality of Priors in terms of satisfying MBA... (sorry about the length of this comment)

First, the three claims need to be modified: in order for the Ideal Bayesians not to disagree, they would presumably also have to have common knowledge that they are both (honest) Ideal Bayesians. (Otherwise you could have two people who both happen to be Ideal Bayesians but each of whom happens to have evidence misleadingly suggesting that the other is not an Ideal Bayesian - and then there is no reason to suppose they should agree.)

Second, there are presumably additional rationality (or reasonableness) constraints on rational (or reasonable) Priors. This is in my view plausible on independent grounds. It also seems to increase the prospects that a unique set of Rational Priors would be picked out, rather than several different sets, each of which contains Priors that would agree with each other but not with the Priors of the other sets.

Third, we might be able to strengthen MBA by weakening the antecedent. Suppose I know that you are a slightly non-Ideal Bayesian, but that you have sufficient evidence to overcome the non-Ideal aspects. Your posterior after conditionalizing on this evidence is not significantly different from what an Ideal Bayesian's posterior would be. Then it would seem we could extend the agreement result to many of these non-Ideal situations.

Fourth, there is the question, however, whether we really want to assume that there is a set of perfectly Ideal Priors, and then all the other Priors are non-Ideal. Perhaps there are some objective constraints that a Prior has to satisfy to be rational. Suppose you have such a Prior. How should you view the other Priors that also satisfy these constraints but that differ from your own Prior?

Suppose you thought that they were all equally valid, or deserving of equal consideration. Does that not mean that you ought to form some kind of average of them, Prior*? But could one not then say that Prior* was more rational than the other Priors, contradicting the assumption that there was a (non-singleton) set of rational Priors? For presumably it is not rational to hold on to a certain Prior if you know another Prior that is more rational.
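
To fix ideas, a minimal sketch of that averaging move, with made-up numbers; an equal-weight linear pool is one natural reading of "some kind of average":

```python
# Prior* as the equal-weight linear pool of a set of priors over one
# proposition H; the three priors here are hypothetical.
priors = [
    {"H": 0.2, "not-H": 0.8},
    {"H": 0.5, "not-H": 0.5},
    {"H": 0.8, "not-H": 0.2},
]
prior_star = {h: sum(p[h] for p in priors) / len(priors) for h in priors[0]}
print(prior_star)  # ~ {'H': 0.5, 'not-H': 0.5}
```

(It is worth noting that linear pooling does not in general commute with conditionalization, which is one independent reason for wariness about Prior*.)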

Suppose instead that you think you should hold on to your own Prior instead of switching to another (such as the average Prior, Prior*). You might reason that objective constraints exist that rule out many Priors as irrational, but among the remaining Priors there are only subjective criteria for choosing or having one rather than another. (This is the view that I am leaning towards.) On this view, the way we might obtain some no-disagreement results is not by considering the existence of various Priors as abstract mathematical objects. Instead, it is only by finding that some Priors have been physically instantiated in some real agent that we might get evidence that could give us grounds for assigning a posterior probability to some proposition in a way that leads to agreement. The fact that evolution actually produced an organism with a particular Prior can be very relevant information about this world we are living in, but not the mere fact that there exists an abstract mathematical object, a particular mapping from some sigma algebra into the unit interval of reals.

If this last thought is correct, then it might indeed be a constraint on all rational (or reasonable) Priors that they are such that, when conditionalized on evidence about the existence of agents with other Priors, and on evidence suggesting that these other agents were produced by causal processes relevantly similar to the causal processes that produced you, they deliver a posterior probability assignment to propositions that is similar (or in some cases identical) to the posterior probability assignment that these other Priors give to those propositions.
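
A toy model of that constraint (my own construction, with made-up numbers): treat each agent's Prior about some quantity theta as the output of a causal process of the same kind, theta plus symmetric noise. Once each agent conditionalizes on the existence of the other and on the relevant similarity of the two processes, neither signal is special, and both land in the same place:

```python
# Hypothetical numbers; with equal-variance symmetric noise around theta and
# a flat pre-prior, the posterior mean of theta given both signals is just
# their average, so the two agents' conditioned Priors coincide.
my_prior_mean, your_prior_mean = 0.3, 0.7
shared_posterior_mean = (my_prior_mean + your_prior_mean) / 2
print(shared_posterior_mean)  # 0.5 for both agents: no residual disagreement
```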

And this is, I think, the gist of what Robin is referring to under (2) of his comment above, except that what NickS and I call "Prior" he calls "pre-Prior", and what Robin calls "Prior" I would call "Posterior, conditionalized on certain facts about the existence of agents with other Priors and about the causal processes that produced you and them".
