17 Comments

The "Chinks In The Bayesian Armor" I would list are:

- Inability to deal with undecidable propositions;
- The problem of the priors;
- Hume's problem of induction.

Some more possible problems:

http://plato.stanford.edu/entries/epistemology-bayesian/#PotPro

Robin, I don't think it's more rational to say "If I matter, then X" rather than just "X." Here's my argument. Suppose you hold the following beliefs:

- If I matter, then X.
- If I don't matter, then not-X.

But if you don't matter, then your beliefs don't matter, so you might as well believe "If I don't matter then X." instead. Then you can simplify both of these beliefs into just "X."
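
To spell out that last simplification step, here is a tiny propositional check (an illustrative sketch of my own; the variable names are arbitrary): once you hold both "if I matter, then X" and "if I don't matter, then X", the only consistent option is X itself.

```python
from itertools import product

# Illustrative check that the two conditionals "if I matter, then X" and
# "if I don't matter, then X" jointly entail X.
def implies(p, q):
    return (not p) or q

for matter, x in product([True, False], repeat=2):
    if implies(matter, x) and implies(not matter, x):
        assert x  # every assignment satisfying both conditionals also satisfies x
print("both conditionals together entail X")
```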

Dynam, it would seem more rational to say "If I matter, i.e., if I exist, am real, and am conscious, then X" rather than just "X."

I think there is a more general framework, within which some of these "exceptions" can be justified. First consider how Bayesianism works in the many-worlds interpretation of QM. If you believe in MWI and also use Bayesianism to justify your belief in "1+1=2", then you are implicitly condemning your counterparts in many other branches to false beliefs in "1+1=1", "1+1=3", etc., because their mental computers happened to make a mistake while computing 1+1. But you accept this because the measure of those branches is low and you assign less value to the epistemic status of low-measure observers. Similarly, the exceptions "Consciousness", "The Real World", and "Real Stuff" can be justified by assigning less (or no) value to the epistemic status of certain categories of observers. In other words, unconscious and unreal people end up with the false beliefs that they are conscious and real, but this doesn't matter because we don't care that those people end up with false beliefs, just as we don't care that low-measure observers in MWI end up with false beliefs.

Of course, I have no idea *why* we don't care about those who are unconscious, unreal, or have low-measure. But I think our beliefs are still justified because who we care about is a subjective value judgment not open to challenge and not requiring further justifications.

We know that a) mathematicians disagree about what logical implications are acceptable, and b) there is no way to resolve that disagreement. For example, constructive mathematicians reject the principle of the excluded middle -- that is, we reject that P or not-P holds for all P. Furthermore, we cannot construct any experiments to decide whether constructive or classical mathematics is correct, because classical and constructive mathematics agree on all finite examples. (In fact, you can understand classical and constructive mathematics as two different (scientific) inductive generalizations from finite sets.)

This poses some deep objections to Bayesianism as a unique standard of rationality. As soon as you want to quantify over spaces of models, you end up needing to form a conception of infinity, of which there isn't a unique best choice. As a result, you can have two different agents who never agree but who also can never present each other with a Dutch Book argument. I think the strongest thing you can say is that non-Dutch-Book-ability is the test of rationality, of which traditional Bayesianism is one example.
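
To make the Dutch Book test concrete, here is a minimal sketch with made-up numbers (the function and prices are purely illustrative): an agent whose credences in A and not-A sum to more than 1 can be sold a pair of bets that guarantee a loss, while coherent credences leave no such opening.

```python
# Minimal Dutch Book sketch (illustrative numbers): an agent who prices a bet
# paying 1 unit if E occurs at their credence p(E) will buy bets on both
# A and not-A; if those credences sum to more than 1, the agent loses for sure.

def bookie_profit(p_A, p_not_A, stake=1.0):
    """Guaranteed profit for the bookie who sells both bets at the agent's prices."""
    prices_collected = (p_A + p_not_A) * stake  # the agent pays p(E) * stake per bet
    payout = stake                              # exactly one of A, not-A pays off
    return prices_collected - payout

print(bookie_profit(0.6, 0.6))  # incoherent credences: 0.2 sure loss for the agent
print(bookie_profit(0.6, 0.4))  # coherent credences: 0.0, no guaranteed loss
```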

Andrew, I respond at your blog post.

Logic, the usual info/prob concept of "know" allows machines to know, even if they have no experiences.

Self, I don't see why beliefs from testimony or about other minds would be exceptions.

Scott, just yesterday I said intuitions are valid evidence, but I don't see how this resolves the above questions.

Sebastian, I lean in your direction, but this post declares me open to hearing more from the other side. Jadagul points in their direction.

Eliezer,

Yes, I could agree that the problem is not with Bayesianism but with the models that are being considered. Rather than comparing model A to model B, I'd rather build a third model that includes the two as special cases: that is, continuous model expansion rather than discrete model averaging. Thus, in the ever popular "Are Newton's Laws true?" example, the point is to get away from the binary yes/no response and to recognize that, yes, the laws are false but it can be unclear how to improve them.
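
As a toy sketch of what I mean by continuous model expansion (invented data and priors, not a real analysis): embed a linear model A and a quadratic model B in one expanded model whose extra coefficient c gets a prior concentrated near zero, so that A appears as the special case c = 0 rather than as a discrete alternative.

```python
import numpy as np

# Toy sketch of continuous model expansion (made-up data and priors):
#   model A: y = a + b*x             (c = 0)
#   model B: y = a + b*x + c*x**2    (c free)
# Instead of averaging A and B, fit the expanded model with a prior that
# shrinks c toward zero; model A appears as the special case c = 0.

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = 1.0 + 2.0 * x + 0.3 * x**2 + rng.normal(0.0, 0.2, size=x.size)

# Crude grid approximation to the posterior of c under a N(0, 0.5^2) prior,
# profiling out a and b by least squares at each candidate value of c
# (a rough stand-in for full marginalization, kept simple on purpose).
c_grid = np.linspace(-1.0, 1.0, 201)
log_post = []
for c in c_grid:
    target = y - c * x**2
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    rss = np.sum((target - X @ beta) ** 2)
    log_lik = -rss / (2 * 0.2**2)       # noise sd assumed known (0.2) for simplicity
    log_prior = -c**2 / (2 * 0.5**2)
    log_post.append(log_lik + log_prior)

post = np.exp(np.array(log_post) - max(log_post))
post /= post.sum()
print("approximate posterior mean of c:", float(np.sum(c_grid * post)))
```

The point of the sketch is that the question shifts from the binary "A or B?" to the continuous "how far from zero is c?".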

Bayesian statistics is like other useful theories: it works best when it has good inputs, or when its range of applicability is suitably restricted. Similarly with decision analysis, cost-benefit analysis, or other problem-solving methods.

I will say, though, that in practice there will be problems with a statistical model, and that's why we characterize Bayesian data analysis as (1) model building, (2) inference, and (3) model checking. If the model were correct, it would just be step 2, but it's not, so it's not.

More discussion here.

I'll try the "consciousness" one. The answer seems fairly simple to me - we *don't* have any evidence, but if we're wrong, there won't be anyone around to experience being wrong. As such, it's a reasonable assumption to make - along the lines of only searching for your car keys where you might possibly find them.

Sebastian: check out the wikipedia article here. Roughly speaking, many mathematicians believe that mathematical objects have independent existence and can be perceived directly, or some variation (there are a lot of schools of thought, again read the article). Thus you can engage them in debates about whether the Axiom of Choice, say, is 'true' or 'false'; Banach-Tarski was originally proven in an attempt to say, "See, look at how stupid the results of this axiom are! You don't really believe that." I, on the other hand, incline towards formalism and so don't think axioms have any truth content at all.

A few comments on three of the cases mentioned by Robin:

Math and Concept Axioms: "Some people think ... we know which math axioms are true." What does it mean for a math axiom to be true? I understand what it means for a formal system to be consistent, and that certain formal systems can be more or less useful in modeling certain aspects of our universe, but I don't understand what it means to call a formal system, or a single axiom, true.

Basic Moral Claims: Basic moral claims are claims about the optimizing process making the claim, or more concretely about aspects of the physical substrate implementing that process. If I say "X is good", this means that the entity Sebastian Hagen prefers future states of the universe with lots of X over those with less, all else being equal. Since I haven't read up on cognitive science I don't understand the details, but this is ultimately a statement about the structure of my brain. Basic moral statements only make sense when considered in the context of an optimization process making them.

Consciousness: I've read up on the concept of philosophical zombies, but I still don't really understand the case of physically identical bodies. Afaict this concept deliberately refrains from making any testable predictions. Is there any reason to conclude it is not content-free?

I use Bayesian inference to summarize my uncertainty _within_ a model, but not to express my uncertainty _between_ models.

The chink that is not a chink, in the Bayesian armor, is that Bayesian inference is often so expensive as to be computationally impossible. Everyone knows and acknowledges this.

So the question is: When you don't use the Bayesian formalism to express your uncertainty between models, are you doing something that is useful and powerful and Fundamentally Beyond The Realms Of Bayesian Science? Or are you just using a cheap approximation, which only works at all because it reflects, in some fragmentary aspect, the pure and ever-glowing Eternal Bayesian Ideal?
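
For what it's worth, here is a toy illustration of the within/between distinction (coin-flip numbers invented for the example): updating a parameter inside one model versus comparing two models by their marginal likelihoods.

```python
from math import exp, lgamma, log

# Toy contrast (made-up data) between "within a model" and "between models".
k, n = 7, 10  # 7 heads in 10 flips

# Within a model: a Beta(1, 1) prior on the coin's bias updates by Bayes' rule
# to Beta(1 + k, 1 + n - k).
alpha_post, beta_post = 1 + k, 1 + (n - k)
posterior_mean = alpha_post / (alpha_post + beta_post)

# Between models: M0 says the coin is fair; M1 puts a uniform prior on the bias.
# The binomial coefficient cancels in the ratio of marginal likelihoods.
log_ml_m0 = n * log(0.5)                                       # (1/2)^n
log_ml_m1 = lgamma(k + 1) + lgamma(n - k + 1) - lgamma(n + 2)  # Beta(k+1, n-k+1)
bayes_factor = exp(log_ml_m1 - log_ml_m0)

print("posterior mean of the bias (within M1):", posterior_mean)
print("Bayes factor for M1 over M0 (between models):", bayes_factor)
```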

I agree with logicnazi. More specifically, your claim on consciousness should be: only the individual conscious being knows he is conscious, whereas he does not know that, arguendo, about third parties.

Your exceptions here depend on a particular flavor of epistemology. If I take an alternative view, such as that intuition is a valid form of acquiring knowledge, then many, if not all, of the exceptions become non-exceptional.

Robin,

You will want to add a) beliefs based on testimony and b) beliefs about other minds.

Robin: Correct me if I am mistaken, but it is my impression that we still have no clear evidence that there are multiple possible worlds, each completely self-consistent but each different. It may be that it only seems this way to us because we are limited in our ability to check models for self-consistency, so limited, in fact, that experiments were needed, for instance, to establish that objects fall at a speed independent of mass.

Andrew: Thanks for the comment.

logicnazi: Do you accept Robin's claims regarding indexicals and logical implications as non-exceptions to the Bayesian framework?

How Bayesian am I?

I was reminded of the varieties of Bayesians after reading this article by Robin Hanson: [I]n our standard framework systems out there have many possible states and our minds can have many possible belief states, and interactions between minds and...

How do we know we have experiences? Because we experience them. They are the DIRECT input that the proper Bayesian takes into account. If they don't fit into your favorite theory about materialism or viewpoint independence or whatever then so much the worse for them.

Yes, you are correct that there could be zombies whose brains have the same workings as ours, but so what? Your mistake here is defining knowledge, belief and the like in terms of third-party observable output. The zombie doesn't know or believe anything because it doesn't have the requisite experiences.

In short, we aren't identical to (only lawfully connected with) our brains, so worries about how our brains might behave under different laws of physics are irrelevant to these questions about knowing.
