Richard Chappell has a couple of recent posts on the rationality of disagreement. As this fave topic of mine appears rarely in the blogosphere, let me not miss this opportunity to discuss it.
In response to the essential question “why exactly should I believe I am right and you are wrong,” Richard at least sometimes endorses the answer “I’m just lucky.” This puzzled me; on what basis could you conclude it is you and not the other person who has made a key mistake? But talking privately with Richard, I now understand that he focuses on what he calls “fundamental” disagreement, where all parties are confident they share the same info and have made no analysis mistakes.
In contrast, my focus is on cases where parties assume they would agree if they shared the same info and analysis steps. These are just very different issues, I think. Unfortunately, they appear to be more related than they are, because of a key ambiguity in what we mean by “belief.” Many common versions of this concept do not “carve nature at the relevant joints.” Let me explain.
Every decision we make is shaped by a tangled mess of influences that defy easy classification. But one important distinction, I think, is between (A) influences that come most directly from inside of us, i.e., from who we are, and (B) influences that come most directly from outside of us. (Yes, of course, indirectly each influence can come from everywhere.) Among outside influences, we can also usefully distinguish (B1) influences that we intend to have track the particular outside things we are reasoning about from (B2) influences that come from rather unrelated sources.
For example, our attitude toward rain soon might be influenced by (A) our dark personality, which makes us expect dark things, and by (B1) seeing dark clouds, which is closely connected to the processes that make rain. Our attitude toward rain might also be influenced by (B2) broad social pressures to make weather forecasts match the emotional mood of our associates, even when this has little relation to whether it will rain.
Differing attitudes between people about rain soon are mainly problematic regarding the (B1) aspects of our mental states that we intend to have track that rain. Yes, of course, if we are different inside, and are ok with remaining different in such ways, then it is ok for our decisions to be influenced by such differences. But such divergence is not so ok regarding the aspects of our minds that we intend to track things outside our minds.
Imagine that two minds intend for certain aspects of their mental states to track the same outside object, but then find consistent or predictable differences between those designated mental aspects. In this case the two minds may suspect that their intentions have failed. That is, their disagreement may be evidence that, for at least one of them, other influences have contaminated the mental aspects that were intended to just track that outside object.
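As a toy illustration of that evidential point (my own sketch, not anything from the post), consider two agents who estimate the same quantity, the daily chance of rain, from equally noisy evidence, except that agent B's reported estimate is also nudged by an unrelated "mood" influence. All names and numbers below are made up. A persistent, predictable gap between the two estimates then shows up, which is exactly the kind of evidence that at least one estimate is tracking something besides the rain.

```python
# Toy simulation: a predictable difference between two estimates of the same
# quantity is evidence that at least one estimate is contaminated.
import random

random.seed(0)
N_DAYS = 10_000
MOOD_BIAS = 0.10   # hypothetical unrelated influence on agent B's reports

gaps = []
for _ in range(N_DAYS):
    true_chance = random.random()                             # today's actual chance of rain
    est_a = true_chance + random.gauss(0, 0.05)               # tracks the rain only
    est_b = true_chance + random.gauss(0, 0.05) + MOOD_BIAS   # rain plus an unrelated influence
    gaps.append(est_a - est_b)

mean_gap = sum(gaps) / N_DAYS
print(f"average gap between the two estimates: {mean_gap:+.3f}")
# If both estimates tracked only the rain, this average would hover near zero;
# a consistently nonzero average is the predictable difference described above.
```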
This is to me the interesting question in the rationality of disagreement: how do we best help our minds to track the world outside us in the face of apparent disagreements? This is just a very different question from what sorts of internal mental differences we are comfortable having and acknowledging.
Unfortunately, most discussions of “beliefs” and “opinions” are ambiguous about whether those who hold such things intend for them to just be mental aspects that track outside objects, or whether such things are also intended to reflect and express key internal differences. Do you want your “belief” in rain to just track the chance it will rain, or do you also want it to reflect your optimism toward life, your social independence, etc.? Until one makes clearer exactly which mental aspects the word “belief” refers to, it seems very hard to answer such questions.
This ambiguity also clouds our standard formal theories. Let me explain. In standard expected-utility decision theory, the two big influences on actions are probabilities and utilities, with probabilities coming from a min-info “prior” plus context-dependent info. Most econ models of decision making assume that all decision makers use expected utility and have the same prior. For example, agents might start with the same prior, get differing info about rain, take actions based on their differing info and values, and then change their beliefs about rain after seeing the actions of others. In such models, info and thus probability is (B1) what comes from outside agents to influence their decisions, while utility (A) comes from inside. Each probability is designed to be influenced only by the thing it is “about,” minimizing influence from (A) internal mental features or (B2) unrelated outside sources.
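To make that setup concrete, here is a minimal sketch, in my own notation and with made-up numbers, of the kind of model described above: two expected-utility agents share a common prior over rain, each privately sees a signal (dark clouds or a clear sky), acts on a simple umbrella rule implied by their utilities, and then updates again after observing the other's action. Nothing here comes from a specific paper; it just illustrates how, with a common prior, disagreement reflects differing info, and how observing actions transmits that info.

```python
# Toy model: common prior + private signals + updating on others' actions.
# All numbers and the action rule below are illustrative assumptions.

PRIOR_RAIN = 0.3                 # common prior P(rain)
P_DARK_GIVEN_RAIN = 0.8          # P(sees dark clouds | rain)
P_DARK_GIVEN_DRY = 0.2           # P(sees dark clouds | no rain)


def posterior_rain(prior: float, saw_dark: bool) -> float:
    """Bayes update of P(rain) on one private 'dark clouds' signal."""
    like_rain = P_DARK_GIVEN_RAIN if saw_dark else 1 - P_DARK_GIVEN_RAIN
    like_dry = P_DARK_GIVEN_DRY if saw_dark else 1 - P_DARK_GIVEN_DRY
    return prior * like_rain / (prior * like_rain + (1 - prior) * like_dry)


def carries_umbrella(belief_rain: float) -> bool:
    """Action rule implied by the agent's utilities: carry iff P(rain) > 0.5."""
    return belief_rain > 0.5


# Agent 1 privately sees dark clouds; agent 2 sees a clear sky.
# (Signals are treated as conditionally independent given the weather.)
p1 = posterior_rain(PRIOR_RAIN, saw_dark=True)    # ~0.63 -> carries umbrella
p2 = posterior_rain(PRIOR_RAIN, saw_dark=False)   # ~0.10 -> leaves it home

# With these numbers the action exactly reveals the signal, so each agent can
# read the other's signal off the other's action and do a second Bayes update.
p1_after = posterior_rain(p1, saw_dark=carries_umbrella(p2))
p2_after = posterior_rain(p2, saw_dark=carries_umbrella(p1))

print(p1, p2)              # they disagree before seeing each other's actions
print(p1_after, p2_after)  # both land back on 0.30 once both signals are shared
```

Before they see each other's actions the two agents disagree (about 0.63 versus 0.10); afterward both settle on the same 0.30, since between them the two signals cancel out. Their differing beliefs came only from differing (B1) info, and the actions transmitted that info.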
In philosophy, however, it is common to talk about the possibility that different people have differing priors. Also, for every set of consistent decisions one could make, there are an infinite number of different pairs of probabilities and utilities that produce those decisions. So one can actually model any situation with several expected-utility folks making decisions as either one with common priors or with uncommon priors.
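For concreteness, here is the standard observation behind that non-uniqueness claim, sketched in my own notation for a one-shot choice over a finite state space (the post itself does not spell out the argument). Rescaling probabilities and utilities by any positive state weights f(s) leaves every expected-utility comparison unchanged:

```latex
% Expected utility of action a over states s:
\[
  \text{EU}(a) \;=\; \sum_{s} p(s)\, u(a,s).
\]
% For any weights f(s) > 0, define a rescaled probability-utility pair:
\[
  p'(s) \;=\; \frac{p(s)\, f(s)}{\sum_{s'} p(s')\, f(s')},
  \qquad
  u'(a,s) \;=\; \frac{u(a,s)}{f(s)}.
\]
% The rescaled expected utility is the original divided by a positive constant,
% so it ranks all actions a identically:
\[
  \sum_{s} p'(s)\, u'(a,s)
  \;=\; \frac{\sum_{s} p(s)\, u(a,s)}{\sum_{s'} p(s')\, f(s')}
  \;=\; \frac{\text{EU}(a)}{\sum_{s'} p(s')\, f(s')}.
\]
```

Since a modeler can apply a different weight function f to each agent, the same observed behavior can be redescribed as coming from common priors with adjusted utilities, or from differing priors; observed choices alone do not settle which.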
Thus in contrast to the practice of most economists, philosophers’ use of “belief” (and “probability” and “prior”) confuses or mixes (A) internal and (B) external sources of our mental states. Because of this, it seems pointless for me to argue with philosophers about whether rational priors are common, or whether one can reasonably have differing “beliefs” given the same info and no analysis mistakes. We would do better to negotiate clearer language to talk about the parts of our mental states that we intend to track what our decisions are about.
Since I’m an economist, I’m comfortable with the usual econ habit of using “probability” to denote such outside influences intended to track the objects of our reasoning. (Such usage basically defines priors to be common.) But I’m willing to cede words like “probability”, “belief”, or “opinion” to other purposes, if other important connotations need to be considered.
However, somewhere in our lexicon for discussing mental states we need words to refer to something like what econ models usually mean by “probabilities”, i.e., aspects of our mental states that we intend to track the objects of our reasoning, and to be minimally influenced by other aspects of our mental states.
(Of course all this can also be applied to “beliefs” about our own minds, if we treat influences coming from the particular aspect of our mind that we are trying to track as if they came from something outside us, in contrast to other influences.)
That closing parenthetical was hard for me to parse as originally written. Here's my reconstruction:
Say it's a belief about some aspect of your mind. Then the parts of your mind responsible for A may be different from the aspect you're trying to grasp (B1). But I would definitely label any spurious influences due to non-B1 parts of my mind as being A, unless the intent in such cases is to adopt the convention that A covers only the things that are idiosyncratic to us, and that if there's some near-universal fact about human minds, it should be called B2 instead. I guess that would be fine.
I also felt like Robin implied that differences in A are acceptable (or at least irreconcilable). But A-differences aren't necessarily benign. There are defects in our (individual and shared) nature that disturb me.
mjgeddes: similarity + complexity = ? (new type of information theory tracking internal beliefs?)
How about: similarity + complexity = channel theory (an existing form of information theory tracking channel components)