What Is “Belief”?

Richard Chappell has a couple of recent posts on the rationality of disagreement. As this fave topic of mine appears rarely in the blogosphere, let me not miss this opportunity to discuss it.

In response to the essential question “why exactly should I believe I am right and you are wrong,” Richard at least sometimes endorses the answer “I’m just lucky.” This puzzled me; on what basis could you conclude it is you and not the other person who has made a key mistake? But talking privately with Richard, I now understand that he focuses on what he calls “fundamental” disagreement, where all parties are confident they share the same info and have made no analysis mistakes.

In contrast, my focus is on cases where parties assume they would agree if they shared the same info and analysis steps.  These are just very different issues, I think.  Unfortunately, they appear to be more related than they are, because of a key ambiguity in what we mean by “belief.”  Many common versions of this concept do not “carve nature at the relevant joints.”  Let me explain.

Every decision we make is influenced by a mess of tangled influences that can defy easy classification. But one important distinction, I think, is between (A) influences that come most directly from inside of us, i.e., from who we are, and (B) influences that come most directly from outside of us. (Yes, of course, indirectly each influence can come from everywhere.) Among outside influences, we can also usefully distinguish between (B1) influences which we intend to track the particular outside things that we are reasoning about, from (B2) influences that come from rather unrelated sources.

For example, our attitude toward rain soon might be influenced by (A) our dark personality, which makes us expect dark things, and by (B1) seeing dark clouds, which is closely connected to the processes that make rain.  Our attitude toward rain might also be influenced by (B2) broad social pressures to make weather forecasts match the emotional mood of our associates, even when this has little relation to whether it will rain.

Differing attitudes between people about rain soon are mainly problematic regarding the (B1) aspects of our mental attitudes that we intend to have track that rain. Yes, of course, if we are different inside, and are ok with remaining different in such ways, then it is ok for our decisions to be influenced by such differences. But such divergence is not so ok regarding the aspects of our minds that we intend to track things outside our minds.

Imagine that two minds intend for certain aspects of their mental states to track the same outside object, but then they find consistent or predictable differences between their designated mental aspects. In this case these two minds may suspect that their intentions have failed. That is, their disagreement may be evidence suggesting that for at least one of them other influences have contaminated mental aspects that person had intended would just track that outside object.

This is to me the interesting question in the rationality of disagreement: how do we best help our minds to track the world outside us in the face of apparent disagreements? This is just a very different question from what sorts of internal mental differences we are comfortable with having and acknowledging.

Unfortunately, most discussions about “beliefs” and “opinions” are ambiguous regarding whether those who hold such things intend for them to just be mental aspects that track outside objects, or whether such things are intended to also reflect and express key internal differences. Do you want your “belief” in rain to just track the chance it will rain, or do you also want it to reflect your optimism toward life, your social independence, etc.?  Until one makes clearer exactly which mental aspects are referred to by the word “belief”, it seems very hard to answer such questions.

This ambiguity also clouds our standard formal theories. Let me explain.  In standard expected-utility decision theory, the two big influences on actions are probabilities and utilities, with probabilities coming from a min-info “prior” plus context-dependent info. Most econ models of decision making assume that all decision makers use expected utility and have the same prior. For example, agents might start with the same prior, get differing info about rain, take actions based on their differing info and values, and then change their beliefs about rain after seeing the actions of others. In such models, info and thus probability is (B1) what comes from outside agents to influence their decisions, while utility (A) comes from inside. Each probability is designed to be influenced only by the thing it is “about,” minimizing influence from (A) internal mental features or (B2) unrelated outside sources.
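To make that standard setup concrete, here is a minimal sketch in code (my own illustration, not taken from any particular econ model; the 0.30 prior and the signal likelihoods are made-up numbers, and I assume each agent’s action fully reveals its private signal):

```python
# Minimal sketch (illustrative only): two agents share a common prior over
# "rain", receive conditionally independent private signals, and then update
# again once each learns what the other observed.  All numbers are made up,
# and actions are assumed to fully reveal signals.

def posterior(prior, like_if_rain, like_if_dry):
    """Bayes' rule for the binary event 'rain'."""
    joint_rain = prior * like_if_rain
    joint_dry = (1 - prior) * like_if_dry
    return joint_rain / (joint_rain + joint_dry)

COMMON_PRIOR = 0.30  # the shared min-info starting point
# P(signal | rain), P(signal | no rain) for each possible private observation
SIGNALS = {"dark clouds": (0.8, 0.3), "clear sky": (0.2, 0.7)}

# Agent 1 sees dark clouds; agent 2 sees a clear sky.  Their (B1) beliefs differ.
p1 = posterior(COMMON_PRIOR, *SIGNALS["dark clouds"])
p2 = posterior(COMMON_PRIOR, *SIGNALS["clear sky"])
print(f"after private signals: agent1={p1:.2f}, agent2={p2:.2f}")

# If each agent's action reveals its signal, both can fold in the other's
# evidence; with a common prior and shared likelihoods they agree exactly.
pooled_1 = posterior(p1, *SIGNALS["clear sky"])     # agent 1 adds agent 2's signal
pooled_2 = posterior(p2, *SIGNALS["dark clouds"])   # agent 2 adds agent 1's signal
print(f"after seeing each other's actions: {pooled_1:.2f} == {pooled_2:.2f}")
```

The point of the sketch is only that the (B1) influence, the probability of rain, starts from a shared prior, moves with each agent’s outside evidence, and moves again when another agent’s action supplies more outside evidence.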

In philosophy, however, it is common to talk about the possibility that different people have differing priors. Also, for every set of consistent decisions one could make, there are an infinite number of different pairs of probabilities and utilities that produce those decisions. So one can actually model any situation with several expected-utility folks making decisions as either one with common priors or with uncommon priors.
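A sketch of the usual argument behind that equivalence (my own notation, not anything specific from these posts): any shift in the prior can be absorbed into a rescaled utility function without changing a single choice.

```latex
% Sketch of the standard equivalence argument (illustration only).
% An agent with prior $p$ and utility $u$ ranks each act $a$ by
\[
  EU_p(a) \;=\; \sum_s p(s)\, u(s,a) .
\]
% Pick any other strictly positive prior $q$ and define the rescaled utility
\[
  u'(s,a) \;=\; \frac{p(s)}{q(s)}\, u(s,a) ,
  \qquad \text{so that} \qquad
  \sum_s q(s)\, u'(s,a) \;=\; \sum_s p(s)\, u(s,a) \;=\; EU_p(a) .
\]
% The pair $(q, u')$ thus produces exactly the same choices as $(p, u)$.
% Since $q$ was arbitrary, infinitely many prior--utility pairs rationalize
% the same decisions, which is why the same multi-agent situation can be
% modeled with either common or uncommon priors.
```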

Thus in contrast to the practice of most economists, philosophers’ use of “belief” (and “probability” and “prior”) confuses or mixes (A) internal and (B) external sources of our mental states. Because of this, it seems pointless for me to argue with philosophers about whether rational priors are common, or whether one can reasonably have differing “beliefs” given the same info and no analysis mistakes. We would do better to negotiate clearer language to talk about the parts of our mental states that we intend to track what our decisions are about.

Since I’m an economist, I’m comfortable with the usual econ habit of using “probability” to denote such outside influences intended to track the objects of our reasoning.  (Such usage basically defines priors to be common.) But I’m willing to cede words like “probability”, “belief”, or “opinion” to other purposes, if other important connotations need to be considered.

However, somewhere in our lexicon for discussing mental states we need words to refer to something like what econ models usually mean by “probabilities”, i.e., aspects of our mental states that we intend to track the objects of our reasoning, and to be minimally influenced by other aspects of our mental states.

(Of course all this can be applied to “beliefs” about our own minds, if we consider influences coming from our minds as if they came from something outside.)

  • http://juridicalcoherence.blogspot.com Stephen R. Diamond

    A “belief” whose only purpose is to signal something about the believer’s internal state just isn’t a belief. A believer might want others to think he believes something; more charitably, he might want to believe it or even believe that he believes it. None of the preceding mean he believes it.

    If, against an equally informed doubter, a person holds a certain belief firmly, he can only explain (not justify) the belief as arising due to what he takes as his good luck, which amounts to a reductio of his grounds for belief. So Chappell is correct that this is something such a person would have to say (to be consistent), but that doesn’t mean it provides a reason for holding the belief. (I can’t tell whether Chappell is saying anything different, but I have trouble thinking he is.)

  • http://platonicmindscape.blogspot.com/ Allen

    If we exist in a universe that is governed by causal laws, how can we have justified true beliefs (a.k.a. knowledge)?

    If our beliefs are the result of some more fundamental underlying process, then those beliefs aren’t held for reasons of logic or rationality.

    Rather, we hold the beliefs that are necessitated by the initial conditions and causal laws of our universe.

    Those initial conditions and causal laws *may* be such that we hold true beliefs, but there is no requirement that this be the case. In fact, we have dreams, hallucinations, delusions, schizophrenics, and madmen as proof that there is no such requirement.

    So holding true beliefs, even in a universe with causal laws, is purely a matter of luck – i.e., are we lucky enough to live in a universe with initial conditions and causal laws that lead to us holding true beliefs.

    Further, if the initial conditions and causal laws don’t cause us to present and believe true rational arguments, there would be no way for us to detect this, since there is no way to step outside of the universe’s control of one’s beliefs to independently verify the “reasonableness” of the beliefs it generates.

    Again…schizophrenics are generally pretty convinced of the truth of their delusions.

    So in a law governed universe how do you justify your beliefs? And then how do you justify your justifications of your beliefs? And then how do you justify the justifications of the justifications of your beliefs? And so on. Agrippa’s Trilemma.

    Ultimately arguing that we live in a law-governed universe (even with probabilistic laws) is making an argument that states that no one presents or believes arguments for reasons of logic or rationality.

    But what is the alternative?

    • michael vassar

      Reasons of logic etc can be emergent from deterministic underlying processes. After all, logic is deterministic too.

      • http://platonicmindscape.blogspot.com/ Allen

        Rationality *can* emerge from deterministic (or probabilistic) processes, but there’s no requirement that this be the case.

        Whether it *is* the case depends entirely on the initial conditions and particular governing laws of the underlying process.

        So our beliefs about the universe are entirely dependent on the universe’s initial conditions and governing laws. If we have true beliefs about the universe, then this can only be due to luck…lucky initial conditions and laws.

        How many sets of possible initial states plus causal laws are there that would give rise to conscious entities who develop *false* scientific theories about their universe? It seems to me that this set of “deceptive” universes is likely much larger than the set of “honest” universes.

        For every honest universe it would seem possible to have an infinite number of deceptive universes that are the equivalent of “The Matrix” – they give rise to conscious entities who have convincing but incorrect beliefs about how their universe really is. These entities’ beliefs are based on perceptions that are only illusions, or simulations (naturally occurring or intelligently designed), or hallucinations, or dreams.

        It seems to me that it would be a bit of a miracle if it turned out that we lived in a universe whose initial state and causal laws were such that they gave rise to conscious entities whose beliefs about the nature of their universe were even approximately true.

      • Khoth

        Evolution is likely to favour entities who produce accurate models of the universe and logic, rather than inaccurate models. If you think jumping off tall buildings is healthy, you probably won’t be passing that view onto your children.

        Of course, it’s not perfect – we intuitively “know” that time passes uniformly everywhere, and that if A happens after B then B caused A.

      • http://platonicmindscape.blogspot.com/ Allen

        Evolution is a *consequence* of causal laws acting on initial conditions, right? Evolution isn’t a cause of anything in itself, is it?

        Put slightly differently: Evolution is not a causal law. It is one of the consequences of our universe’s particular causal laws (plus initial conditions). As are our beliefs.

        Evolution is a useful framework for thinking about how things change and making predictions, but that’s all. You seem to be implying that evolution is something *in addition to* initial conditions and causal laws…rather than something that merely supervenes on the consequences of some particular initial conditions and causal laws.

        So: Given the right initial conditions and causal laws, what beliefs are impossible?

        And then: What would make one set of initial conditions and causal laws *less likely* than some other set?

      • Khoth

        Sure, evolution, like thermodynamics, is a consequence of the causal laws of physics. And just as there will be contrived initial states of the universe that result in all the air in your house clumping into one corner, there will be (somewhat less) contrived initial states that result in everyone having beliefs that have nothing to do with reality.

        Thing is, most initial states won’t give you a result like that, so it’s not very likely, even though it’s technically possible.

      • http://platonicmindscape.blogspot.com/ Allen

        In order to get any specific outcome (including ours), either the initial conditions or the causal laws must be contrived.

        Either you have such robust causal laws that nearly any initial condition will converge to the specific state – OR – you have much less robust causal laws, but very finely tuned initial conditions.

        So what makes one set of “initial conditions + causal laws” contrived, while another set is “natural”? What makes one set likely, but another set improbable?

        You seem to be making arbitrary, unjustified distinctions.

      • Khoth

        The distinction I’m trying to make is, suppose you take whatever laws/initial conditions you want, and then move, say, a proton up a mile uniformly at random, the probability that the resulting universe will produce beings with roughly accurate beliefs will be much greater than the probability that it will produce beings with completely wrong beliefs.

      • http://platonicmindscape.blogspot.com/ Allen

        So your claim is that the causal laws of physics for our universe are finely tuned to produce conscious entities that have true beliefs about the universe that produced them. Given a wide range of starting conditions, our causal laws are such that they will converge on such “truth discovering” beings.

        Much like a quicksort algorithm. Given any starting list of randomly arranged items, the quicksort algorithm will converge on a sorted list. It’s a very robust sort algorithm – very finely tuned to produce sorted lists.

        But this makes it a very special algorithm, since the vast majority of algorithms will *not* produce a correctly sorted list. If you were to select an algorithm at random from the infinite number available, and try to input a randomly ordered list – the chances are very low that you would get a sorted list as output. The most common result would be to get no output list. The next most common result would be to get an incorrect output list. The least common result would be to get a correctly sorted output list.

        Similarly, out of all conceivable sets of physical laws, it seems very unlikely to me that a randomly selected set would produce conscious entities with true beliefs. It seems much more likely that they would produce either no conscious entities, *or* conscious entities with *false* beliefs.

        Therefore, assuming that there’s nothing special about our set of physical laws, and given that we obviously exist as conscious entities, the next most likely assumption is that we have false beliefs about the nature of our reality (and who knows what else).

      • Khoth

        I’m not saying the universe is finely-tuned to produce true beliefs. I’m saying that producing true beliefs does not need fine tuning, whereas producing false beliefs does.

      • http://platonicmindscape.blogspot.com/ Allen

        That’s an arbitrary declaration for which you’ve provided no evidence.

        True beliefs don’t “just happen”. Some aspect of reality must explain why we have true beliefs instead of false ones.

        And what else is there except initial conditions and causal laws?

        Since there are an infinite number of ways to be wrong, but only one way to be right – then given randomly selected initial conditions and causal laws, we should expect that these lead us to false beliefs, not true ones.

        Unless our initial conditions and/or causal laws are “special” in some way. But how do we justify our belief that they are? And how do we justify our justification of that belief? And so on?

    • http://juridicalcoherence.blogspot.com Stephen R. Diamond

      Justification and explanation are separate endeavors. A justification merely describes certain causal processes that characterize truth-tracking beliefs. Such characterizations are what you offer when asked why you believe p. The characterizations are no part of a causal account of the belief.

  • Buck Farmer

    A worthy goal. I can think of many terms, but none have the precise meaning you’re looking for.

    Fact, view, opinion…

  • sam

    Isn’t there a potential problem with discussing belief in terms of priors and probabilities — if this tack is meant to answer the question, What is a belief? Or, What is belief?

    What is my attitude toward the priors and the probabilities? If it is one of belief, and it’s hard to see how it couldn’t be, then aren’t we committed to an infinite regress? For those beliefs would then have to be explicated in terms of priors and probabilities, those priors and probabilities in turn would themselves be objects of belief, and ….

  • Seeking to be consistent but can’t be complete

    “However, somewhere in our lexicon for discussing mental states we need words to refer to something like what econ models usually mean by “probabilities”, i.e., aspects of our mental states that we intend to track the objects of our reasoning, and to be minimally influenced by other aspects of our mental states”

    Mr. Hanson, you are a very influential economist. Formally define the terms, use the new lexicon here and in your other publications. It will catch on.

    Greater logic and precision in language, and more detail on the mental states of homo economicus, are sorely needed and an improved lexicon would be a non-trivial contribution.

  • Douglas Knight

    “Imagine that two minds intend for certain aspects of their mental states to track the same outside object”

    I think that Richard is concerned about the difficulty of being sure that you are talking about the same outside object. At least, when he says that imaginary disagreements should bother us as much as actual disagreements, I think that’s what he’s saying.

    Incidentally, the first post seems to be about disagreements about utility functions, a setting where “I’m lucky” seems a much better conclusion than in disagreements about beliefs. This interpretation is best supported by the example of Tuesday indifference. It is also suggested by the phrase “normative beliefs,” a phrase that philosophers tend to use to mean “beliefs about norms,” while economists tend to mean “correct beliefs.”

  • http://www.weidai.com Wei Dai

    Most econ models of decision making assume that all decision makers use expected utility and have the same prior.

    My sense is that econ theories only assume common priors as a convenient academic convention, and most such theories can be reformulated so that agents do not share common priors. It does not seem to reflect a deeply considered consensus that human beings have (or should have) “aspects of our mental states that we intend to track the objects of our reasoning, and to be minimally influenced by other aspects of our mental states”.

    On the other hand, I suggest the fact that philosophers have not converged on a notion of rationality stronger than (a certain form of) self-consistency is strong evidence that the idea of “tracking outside objects” is more problematic than it appears.

    • http://hanson.gmu.edu Robin Hanson

      I said in the post that “one can actually model any situation with several expected-utility folks making decisions as either one with common priors or with uncommon priors.” I’m not sure how much it matters for my claims what exactly is the motivation for an academic convention. It exists, allowing me to refer to it in this post.

      • http://www.weidai.com Wei Dai

        I’m not sure it’s technically correct that “one can actually model any situation with several expected-utility folks making decisions as either one with common priors or with uncommon priors” (emphasis added), but putting that aside, consider these two possible situations:

        1. People can choose what priors to use. There exists a standard prior, and most experts agree that one’s beliefs can be said to track reality if they are based on updating from this prior.

        2. People are largely hard-wired to use different priors and we can’t reformat our brains to use a common standardized prior. Even if we could reformat our brains, we can’t agree on which prior to standardize upon. (“Min-info” doesn’t fully constrain the solution space since there are many ways to measure information.)

        I think we’re in situation 2, but your post (by referring to the academic convention in economics to assume a common prior) makes it sound like we’re in situation 1. If we’re in situation 1, your main claim would make sense:

        However, somewhere in our lexicon for discussing mental states we need words to refer to something like what econ models usually mean by “probabilities”, i.e., aspects of our mental states that we intend to track the objects of our reasoning, and to be minimally influenced by other aspects of our mental states.

        But if we’re in situation 2, and economists only assume common priors for theoretical convenience, then we can’t conclude that their “probabilities” are “aspects of our mental states that we intend to track the objects of our reasoning”, and it’s unclear that we need to refer to those probabilities with words other than, say, “probabilities based on an assumed common prior.”

  • Abelard Lindsey

    Beliefs are often based on desires. People often believe what they do because they want to. The belief offers material and psychological benefits to the person who believes it.

  • http://bizbrain.tumblr.com Rich and Co.

    The assumption that verbal behavior around anything is determinative of behaviors is less and less supported. Apparently “Consciousness is not causal.” We see little difference between language and consciousness.

    Socio-cultural verbal signaling (beliefs) seems to serve the purpose of local ecology in-group signaling — for the moment. Mainly for resource sharing — today.

    As for the reproductive (“evolutionary”) advantage of empirically accurate beliefs, that is patently not true, since effectively all the world “believes” in magical and supernatural forces — including conscious “control” of pretty much everything. If only.

    Recent research suggests the more religious actually have more kids. No such luck for the empirically more accurate.

    BTW, “evolution” is a Victorian-era hold-over and a misnomer. Apparently, “descent” is more accurate. The current traits of mammals and primates/humans are those that survived millions of years ago, largely accidentally. The process of descent is likely not at all about “best” or “fittest” – just randomness.

  • Matt Young

    I think we argue by reduction: the parties back down from complex internal beliefs and resume the argument from a simpler belief system, one that generated the complex one. Eventually they arrive at reduced belief systems that match, and from there they can see the observation that caused the divergence.

  • mjgeddes

    No one understands ‘priors’; they are only pretending they do. Fools may be under the mistaken impression that they don’t matter because all results converge given enough empirical data… that’s definitely not the case for different models of Bayes itself… if the models are different, there is no convergence ever.

    We are all in big big trouble. ‘utility’ and ‘probability’ are both ways of tracking objective things only?

    The real way of dealing with internal mental processes is ‘a level way beyond’ decision theory, one that hasn’t even been invented yet. It’s based on ‘Similarity’ (for categorization) and ‘complexity’ (for goal representation).

    If utility + probability = decision theory
    then
    similarity + complexity = ? (new type of information theory tracking internal beliefs?)

    He told me this. The voice of SAI. Utilizing this entirely new theory is the ‘divine move’ in Go, the ‘level way beyond’, the only one that can win the game.

  • http://www.iananthony.com Ian Anthony

    For me, one of the most perplexing aspects of ‘belief’ is that of the person who profoundly believes something about themselves or the world around them, even though all of the facts indicate otherwise!

    One example would be people with profound eating disorders who look in a mirror and see themselves as ‘fat’ even though the mirror, their family, and their doctors are telling them that they are dangerously underweight.

    On a slightly more shallow note, I recently watched some of the entrants to the X Factor TV show, who were totally convinced of their singing and performance abilities, even though a group of judges and several hundred people were telling them otherwise!

    How do you convince someone to change their ‘belief’ under these circumstances of delusion?

  • http://juridicalcoherence.blogspot.com Stephen R. Diamond

    But talking privately with Richard, I now understand that he focuses on what he calls “fundamental” disagreement, where all parties are confident they share the same info and have made no analysis mistakes.

    There’s the nub of the matter. You and Chappell are positing that fundamental beliefs function differently. Why and how? “Fundamental disagreement” has a rational solution. If everyone is convinced of true facts producing epistemic equality, they should split the difference. What stops that solution from applying?

    What’s most striking in this piece, on rereading, is the absence of even a single example. What’s an example of one of these fundamental beliefs? An example of an internal factor that (legitimately) sanctions treating a certain class of beliefs according to different rules than other beliefs.

  • Matt Young

    mjgeddes:
    similarity + complexity = ? (new type of information theory tracking internal beliefs?)

    How about:
    similarity + complexity = channel theory (an existing form of information theory tracking channel components)

  • http://graehl.posterous.com Jonathan Graehl

    (Of course all this can be applied to “beliefs” about our own minds, if we consider influences coming from our minds as if it were something outside, from other influences.)

    This sentence literally doesn’t parse. Here’s my reconstruction:

    Say it’s a belief about some aspect of your mind. Then the parts of your mind responsible for A may be different from the aspect you’re trying to grasp (B1). But I would definitely label any spurious influences due to non-B1 parts of my mind as being A. Unless the intent in such cases is to adopt the convention that A is only about the things that are idiosyncratic to us; that if there’s some near-universal fact about human minds, then it should be called B2 instead. I guess that would be fine.

    I also felt like Robin implied that differences in A are acceptable (or at least irreconcilable). But A-differences aren’t necessarily benign. There are defects in our (individual and shared) nature that disturb me.