Chinks In The Bayesian Armor

To judge if beliefs are biased, Eliezer, I, and others here rely heavily on our standard ("Bayesian") formal theories of information and probability.  These are by far the main formal approaches to such issues in physics, economics, computer science, statistics, and philosophy.  They fit well with many, but not all, of our specific intuitions about which beliefs are reasonable.

There are, however, a number of claimed exceptions, cases where many people think certain beliefs are justified even though they seem contrary to this standard framework.  This interferes with our efforts to overcome bias, as it allows people with beliefs contrary to this standard framework to claim their beliefs are yet more exceptions.  I am thus tempted to reject all claimed exceptions, but that wouldn’t be fair.  So I’m instead raising the issue and offering a quick survey of claimed exceptions.  Perhaps future posts can consider each one of these in more detail.

To review, in our standard framework systems out there have many possible states and our minds can have many possible belief states, and interactions between minds and systems allow their states to become correlated.  This correlation lets minds have beliefs about systems that correlate with the states of those systems.  The exact degree of belief appropriate depends on our beliefs about the correlation, and can be expressed with exact but complex mathematical expressions.   
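
In symbols (a standard gloss of the simplest case; the notation is mine): if $s$ ranges over possible system states and $m$ over possible mind states, the appropriate degree of belief is the conditional probability given by Bayes' rule,

$$ P(s \mid m) \;=\; \frac{P(m \mid s)\,P(s)}{\sum_{s'} P(m \mid s')\,P(s')}, $$

where the mind-system correlation enters entirely through the likelihood $P(m \mid s)$.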

OK, the following do not seem to be exceptions:

Indexicals – States of physical systems are usually defined from the view of a neutral third party, e.g., what objects are where in space-time.  But people in such a system can also be uncertain about their "index," which says where they are in that system, e.g., where they are in space-time.  While this introduces interesting new issues, once one introduces a larger set of indexical states, it seems the standard framework works just fine.
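
As a sketch of the enlargement (notation mine): write $W$ for the set of third-party world states and $I$ for the set of possible indices; beliefs are then distributions over pairs in $W \times I$, and conditioning on evidence $e$ proceeds exactly as before,

$$ P(w, i \mid e) \;=\; \frac{P(e \mid w, i)\,P(w, i)}{\sum_{w', i'} P(e \mid w', i')\,P(w', i')}. $$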

Logical Implications – These are the consequences of math axioms, or of concept definitions.  As Eliezer tried recently to make clear, logical implications are not exceptions; they in fact fit just fine in the standard framework.  When we arrange for error-prone devices (including our minds) to compute implications, the outputs of such devices are info we can use to draw conclusions about those implications.  While the implications themselves are the same in all states, our error-prone beliefs cannot be completely certain of them.
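
As a toy illustration of this point (the prior and accuracy numbers below are made up): a single run of a 99%-reliable checker that reports a proposition true moves a 50% prior to about 99%.

```python
# A toy Bayesian update on the output of an error-prone checker that
# reports the truth value of a fixed logical proposition.
def posterior_true(prior, accuracy, says_true):
    # P(output says "true" | proposition true)  = accuracy
    # P(output says "true" | proposition false) = 1 - accuracy
    like_t = accuracy if says_true else 1 - accuracy
    like_f = (1 - accuracy) if says_true else accuracy
    return like_t * prior / (like_t * prior + like_f * (1 - prior))

# One run of a 99%-reliable checker saying "true": 0.5 -> 0.99.
print(posterior_true(prior=0.5, accuracy=0.99, says_true=True))
```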

Here are possible exceptions:

Math and Concept Axioms – Some people think that we know more about math than theorems saying what axioms imply what consequences; they think we know which math axioms are true.  This is more than saying which mathematical abstractions are how useful in our actual universe.  Similarly, many say we know which of the many possible concept definitions are the true ones.  But it is not clear how our mental states could have become correlated with such math or concept truths. 

Basic Moral Claims – Whether it is right to kill someone can depend on whether they were in fact a murderer, so a moral belief can depend on ordinary beliefs.  But we can extract "basic" moral claims which do not so depend, such as whether it can ever be right to kill.  Some say basic moral claims are really claims about preferences, while others say they are about what social norms were most adaptive for our ancestors.  But most people insist both that moral claims are not just about physical or mental states, and that we have reliable beliefs about such claims.  But how could such reliable beliefs arise?

Consciousness – Zombies are imagined creatures with physical bodies identical to ours, but with no inner life or subjective experience; there is nothing it is like to be a zombie.  Since zombies would claim to experience consciousness just as we do, our brains have no info whatsoever suggesting that we are not zombies.  But, according to David Chalmers and others, we are in fact conscious and we in fact know this.  If so, how do we know?

The Real World – A possible world is a completely self-consistent description of how things could be.  Each person in such a possible world has all the same sorts of relations to systems and info in that world that we do to systems and info in our world.  So they have just as much info suggesting they exist as we have suggesting we exist.  David Lewis famously claimed all possible worlds are just as real as ours.  But most people believe that only one of the many possible worlds is the real world, and that we correctly believe we are in the one real world.  If so, how do we know?

Real Stuff – Physics models that many think "end" at some point often allow "analytic continuations" where the math is naturally extended to larger models.  For example, space-time can be thought of as ending in the middle of a black hole, or as continuing on out into new regions.  Similarly, some say the projection postulate in quantum mechanics destroys all but one branch, while others say all branches continue on independently.  Those who say analytic continuations are unreal are saying people described by those continuations are unreal, even though they have the same local info relations as real people.   How do real people know they are real?

  • http://www.stat.columbia.edu/~gelman/blog/ Andrew Gelman

    Robin

    Let me comment from my position as a Bayesian statistician–that is, someone who applies Bayesian methods to statistical analysis. I do not actually find the Bayesian approach to be useful in characterizing my belief states. To be more precise, I use Bayesian inference to summarize my uncertainty _within_ a model, but not to express my uncertainty _between_ models. Rather, I will hypothesize a model, use it to make predictions (forecasts and hindcasts) and then evaluate the model based on the fit of the predictions to data and prior information.

    We discuss this issue a bit more in chapter 6 of Bayesian Data Analysis, in the discussion of model checking, discrete model averaging, and continuous model averaging.

    To take a historical example, I don’t find it useful, from a statistical perspective, to say that in 1850, say, our posterior probability that Newton’s laws were true was 99%, then in 1900 it was 50%, then by 1920, it was 0.01% or whatever. I’d rather say that Newton’s laws were a good fit to the available data and prior information back in 1850, but then as more data and a clearer understanding became available, people focused on areas of lack of fit in order to improve the model.

    In the areas where I work, models are never correct or even possibly correct, but it can be useful to use Bayesian inference as a tool to assess uncertainty within a fitted model. Also, and perhaps just as importantly, Bayesian inference is useful in creating probabilistic forecasts and hindcasts that can be compared to real data in order to assess aspects of model misfit. I know that other people have found Bayesian methods useful more directly for model selection (i.e., discrete model averaging, computing the posterior probability that each particular candidate model is true) but this has just about never worked for me; see the aforementioned chapter 6 or this article from Sociological Methodology for more discussion.
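
    As a minimal sketch of the sort of predictive checking I have in mind (the normal model, known variance, and the max() test statistic here are purely illustrative assumptions):

    ```python
    # Posterior predictive check: fit a simple model, simulate replicated
    # datasets from the posterior, and compare them to the observed data
    # on a statistic the model might misfit.
    import numpy as np

    rng = np.random.default_rng(0)
    y = rng.normal(loc=1.0, scale=1.0, size=50)   # toy "observed" data
    n = len(y)

    # Flat prior on the mean, known unit variance:
    # posterior is mu | y ~ Normal(mean(y), 1/n).
    mu_draws = rng.normal(y.mean(), np.sqrt(1.0 / n), size=2000)

    # Replicated datasets and a check on the sample maximum.
    y_rep = rng.normal(mu_draws[:, None], 1.0, size=(2000, n))
    p_val = np.mean(y_rep.max(axis=1) >= y.max())
    print(f"posterior predictive p-value for max(y): {p_val:.2f}")
    ```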

  • http://profile.typekey.com/logicnazi/ logicnazi

    How do we know we have experiences? Because we experience them. They are the DIRECT input that the proper Bayesian takes into account. If they don’t fit into your favorite theory about materialism or viewpoint independence or whatever, then so much the worse for those theories.

    Yes, you are correct that there could be zombies whose brains have the same workings as ours, but so what? Your mistake here is defining knowledge, belief, and the like in terms of third-party observable output. The zombie doesn’t know or believe anything because it doesn’t have the requisite experiences.

    In short, we aren’t identical to (only lawfully connected with) our brains, so worries about how our brains might behave under different laws of physics are irrelevant to these questions about knowing.

  • http://www.stat.columbia.edu/~cook/movabletype/archives/2007/10/how_bayesian_am.html Statistical Modeling, Causal Inference, and Social Science

    How Bayesian am I?

    I was reminded of the varieties of Bayesians after reading this article by Robin Hanson: [I]n our standard framework systems out there have many possible states and our minds can have many possible belief states, and interactions between minds and…

  • michael vassar

    Robin: Correct me if I am mistaken, but it is my impression that we still have no clear evidence that there are multiple possible worlds, each completely self-consistent but each different. It may be that it only seems this way to us because we are limited in our ability to check models for self-consistency, so limited, in fact, that experiments were needed, for instance, to establish that objects fall at a speed independent of mass.

    Andrew: Thanks for the comment.

    logicnazi: Do you accept Robin’s claims regarding indexicals and logical implications as non-exceptions to the Bayesian framework?

  • Selfreferencing

    Robin,

    You will want to add a) beliefs based on testimony and b) beliefs about other minds.

  • http://catallarchy.net/blog Scott Scheule

    I agree with logicnazi. More specifically, your claim on consciousness should be: only the individual conscious being knows he is conscious, whereas he does not know that, arguendo, about third parties.

    Your exceptions here depend on a particular flavor of epistemology. If I take an alternative view, such as the view that intuition is a valid way of acquiring knowledge, then many, if not all, of the exceptions become non-exceptional.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    I use Bayesian inference to summarize my uncertainty _within_ a model, but not to express my uncertainty _between_ models.

    The chink that is not a chink, in the Bayesian armor, is that Bayesian inference is often so expensive as to be computationally impossible. Everyone knows and acknowledges this.

    So the question is: When you don’t use the Bayesian formalism to express your uncertainty between models, are you doing something that is useful and powerful and Fundamentally Beyond The Realms Of Bayesian Science? Or are you just using a cheap approximation, which only works at all because it reflects, in some fragmentary aspect, the pure and ever-glowing Eternal Bayesian Ideal?

  • http://www.memespace.net Sebastian Hagen

    A few comments on three of the cases mentioned by Robin:

    Math and Concept Axioms
    Some people think … we know which math axioms are true.
    What does it mean for a math axiom to be true? I understand what it means for a formal system to be consistent, and that certain formal systems can be more or less useful in modeling certain aspects of our universe, but I don’t understand what it means to call a formal system, or a single axiom, true.

    Basic Moral Claims:
    Basic moral claims are claims about the optimizing process making the claim, or more concretely about aspects of the physical substrate implementing that process.
    If I say “X is good” this means that the entity Sebastian Hagen prefers future states of the universe with lots of X over those with less, all else being equal. Since I haven’t read up on cognitive science I don’t understand the details, but this is ultimately a statement about the structure of my brain. Basic moral statements only make sense when considered in the context of an optimization process making them.

    Consciousness:
    I’ve read up on the concept of philosophical zombies, but I still don’t really understand the case of physically identical bodies. Afaict this concept deliberately refrains from making any testable predictions. Is there any reason to conclude it is not content-free?

  • Jadagul

    Sebastian: check out the wikipedia article here. Roughly speaking, many mathematicians believe that mathematical objects have independent existence and can be perceived directly, or some variation (there are a lot of schools of thought, again read the article). Thus you can engage them in debates about whether the Axiom of Choice, say, is ‘true’ or ‘false’; Banach-Tarski was originally proven in an attempt to say, “See, look at how stupid the results of this axiom are! You don’t really believe that.” I, on the other hand, incline towards formalism and so don’t think axioms have any truth content at all.

  • Svein Ove

    I’ll try the “consciousness” one.
    The answer seems fairly simple to me – we *don’t* have any evidence, but if we’re wrong, there won’t be anyone around to experience being wrong. As such, it’s a reasonable assumption to make – along the lines of only searching for your car keys where you might possibly find them.

  • http://www.stat.columbia.edu/~gelman/blog/ Andrew Gelman

    Eliezer,

    Yes, I could agree that the problem is not with Bayesianism but with the models that are being considered. Rather than comparing model A to model B, I’d rather build a third model that includes the two as special cases: that is, continuous model expansion rather than discrete model averaging. Thus, in the ever popular “Are Newton’s Laws true?” example, the point is to get away from the binary yes/no response and to recognize that, yes, the laws are false but it can be unclear how to improve them.
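
    As a minimal sketch of what I mean by continuous model expansion (the normal-embedded-in-t example, the flat prior, and the grid below are illustrative assumptions):

    ```python
    # Continuous model expansion: embed the normal model in a t family
    # indexed by a continuous degrees-of-freedom parameter, so that
    # "normal vs. heavy-tailed" is no longer a discrete choice.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    y = rng.standard_t(df=4, size=100)        # toy heavy-tailed data

    df_grid = np.linspace(1, 50, 200)         # expansion parameter
    loglik = np.array([stats.t.logpdf(y, df=d).sum() for d in df_grid])
    post = np.exp(loglik - loglik.max())
    post /= post.sum()                        # posterior on grid, flat prior
    print("posterior mean df:", (df_grid * post).sum())
    ```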

    Bayesian statistics is like other useful theories: it works best when it has good inputs, or when its range of applicability is suitably restricted. Similarly with decision analysis, cost-benefit analysis, or other problem-solving methods.

    I will say, though, that in practice there will be problems with a statistical model, and that’s why we characterize Bayesian data analysis as (1) model building, (2) inference, and (3) model checking. If the model were correct, it would just be step 2, but it’s not, so it’s not.

    More discussion here.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Andrew, I respond at your blog post.

    Logic, the usual info/prob concept of “know” allows machines to know, even if they have no experiences.

    Self, I don’t see why beliefs from testimony or about other minds would be exceptions.

    Scott, just yesterday I said intuitions are valid evidence, but I don’t see how this resolves the above questions.

    Sebastian, I lean in your direction, but this post declares me open to hearing more from the other side. Jadagul points in their direction.

  • Neel Krishnaswami

    We know that a) mathematicians disagree about what logical implications are acceptable, and b) there is no way to resolve that disagreement. For example, constructive mathematicians reject the principle of the excluded middle — that is, we reject that P or not-P holds for all P. Furthermore, we cannot construct any experiments to decide whether constructive or classical mathematics is correct, because classical and constructive mathematics agree on all finite examples. (In fact, you can understand classical and constructive mathematics as two different (scientific) inductive generalizations from finite sets.)

    This poses some deep objections to Bayesianism as a unique standard of rationality. As soon as you want to quantify over spaces of models, you end up needing to form a conception of infinity, of which there isn’t a unique best choice. As a result, you can have two different agents who never agree, but also can never present the other with a Dutch Book argument. I think the strongest you can say is that non-Dutch-Book-ability is the test of rationality, of which traditional Bayesianism is one example.
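
    As a concrete illustration (Lean 4 syntax; a sketch of the general point, not anything from the discussion above): in a constructive proof assistant, excluded middle is not provable outright but must be pulled in via the classical axioms.

    ```lean
    -- Lean 4: the core logic is constructive, so `p ∨ ¬p` for an
    -- arbitrary `p` has no direct proof; `Classical.em` is derived
    -- from the classical choice axiom `Classical.choice`.
    theorem lem (p : Prop) : p ∨ ¬p :=
      Classical.em p
    ```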

  • Dynamically Linked

    I think there is a more general framework, within which some of these “exceptions” can be justified. First consider how Bayesianism works in the many-worlds interpretation of QM. If you believe in MWI and also use Bayesianism to justify your belief in “1+1=2”, then you are implicitly condemning your counterparts in many other branches to false beliefs in “1+1=1”, “1+1=3”, etc., because their mental computers happened to make a mistake while computing 1+1. But you accept this because the measure of those branches is low and you assign less value to the epistemic status of low-measure observers. Similarly, the exceptions “Consciousness”, “The Real World”, and “Real Stuff” can be justified by assigning less (or no) value to the epistemic status of certain categories of observers. In other words, unconscious and unreal people end up with the false beliefs that they are conscious and real, but this doesn’t matter because we don’t care that those people end up with false beliefs, just as we don’t care that low-measure observers in MWI end up with false beliefs.

    Of course, I have no idea *why* we don’t care about those who are unconscious, unreal, or have low-measure. But I think our beliefs are still justified because who we care about is a subjective value judgment not open to challenge and not requiring further justifications.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Dynam, it would seem more rational to say “If I matter, i.e., if I exist, am real, and am conscious, then X” rather than just “X.”

  • Dynamically Linked

    Robin, I don’t think it’s more rational to say “If I matter, then X” rather than just “X.” Here’s my argument. Suppose you hold the following beliefs:

    – If I matter, then X.
    – If I don’t matter, then not-X.

    But if you don’t matter, then your beliefs don’t matter, so you might as well believe “If I don’t matter then X.” instead. Then you can simplify both of these beliefs into just “X.”
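
    To spell out that last step (a mechanical check, assuming classical logic): from “If I matter, then X” and “If I don’t matter, then X”, X follows by case analysis.

    ```python
    # Check that "M -> X" and "not M -> X" jointly entail X by
    # exhausting the four truth assignments to M and X.
    from itertools import product

    def implies(a, b):
        return (not a) or b

    entailed = all(
        x
        for m, x in product([True, False], repeat=2)
        if implies(m, x) and implies(not m, x)
    )
    print(entailed)  # True
    ```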

  • http://timtyler.org/ Tim Tyler

    The “Chinks In The Bayesian Armor” I would list are:

    • Inability to deal with undecidable propositions;
    • The problem of the priors;
    • Hume’s problem of induction.

    Some more possible problems:

    http://plato.stanford.edu/entries/epistemology-bayesian/#PotPro