
Douglas, if you have a probability distribution over "impossible possible worlds" and treat the presentation of the proof as evidence causing an update, then it is possible a bounded rationalist could avoid the conjunction fallacy in that sense. The very question itself would act as a kind of evidence about logical truths.


D. Knight, I believe we just cross-posted, but my post is a more explicit example of what could be happening. Essentially, I am arguing that the presentation of A and B should increase your perceived probability of B if, in your previous calculation, you had not considered event A as one of the possible events leading to the occurrence of B.


Actually, if there exists a set of mutually exclusive and exhaustive events A1, A2, ..., An which could result in event A happening, then by the law of total probability

P(A) = P(A|A1)P(A1) + P(A|A2)P(A2) + ... + P(A|An)P(An).

Now suppose you are asked for the probability of A, and you can only think of a subset of A1, A2, ..., An, say A1 and A2, so that your estimate of P(A) = P(A|A1)P(A1) + P(A|A2)P(A2) is much less than the true probability P(A).

Now suppose you are asked for the probability of A and A4. It is possible that your estimate of P(A and A4) = P(A|A4)P(A4) is much larger than your previous estimate of P(A), which was P(A|A1)P(A1) + P(A|A2)P(A2).

Clearly, this is a case in which your estimate of P(A) could be much smaller than your estimate of P(A and A4). But all this implies is that your original estimate of P(A) was much smaller than it should have been, since you didn't include event A4 in your calculation. In this situation, I would not say that your reasoning was fallacious, unless you fail to realize that your original estimate of P(A) should have been higher, which seems to be what happens when you are presented with A and with "A and B" at the same time, yet fallaciously decide that P(A and B) is higher than P(A).
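As a rough numerical sketch of the argument above (the events and probabilities below are my own made-up assumptions, chosen only to make the arithmetic visible; Python is used purely for illustration):

```python
# Toy illustration: a bounded reasoner who only thinks of some of the
# disjoint ways A could happen will underestimate P(A), and a later
# estimate of P(A and A4) can exceed that underestimate without any
# probability axiom being violated.

# Hypothetical numbers (assumptions for illustration only).
p_cause   = {"A1": 0.02, "A2": 0.03, "A3": 0.10, "A4": 0.20, "A5": 0.65}
p_A_given = {"A1": 0.50, "A2": 0.40, "A3": 0.30, "A4": 0.60, "A5": 0.05}

# True P(A) via the law of total probability over all the causes.
true_p_A = sum(p_A_given[c] * p_cause[c] for c in p_cause)

# Bounded estimate: the reasoner only thinks of A1 and A2.
bounded_p_A = sum(p_A_given[c] * p_cause[c] for c in ("A1", "A2"))

# Later estimate of the conjunction, P(A and A4) = P(A|A4)P(A4).
p_A_and_A4 = p_A_given["A4"] * p_cause["A4"]

print(f"true P(A)             = {true_p_A:.3f}")     # ~0.20
print(f"bounded estimate P(A) = {bounded_p_A:.3f}")  # ~0.022
print(f"estimate P(A and A4)  = {p_A_and_A4:.3f}")   # ~0.12

# The conjunction estimate exceeds the earlier bounded estimate of P(A),
# yet is still below the true P(A): the problem was the original
# underestimate, not the later conjunction judgment.
```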


But it's still a fallacy. You can't really have P(A&B) > P(B), said the subjective Bayesian.

You don't have that in this example. What you have is that the presentation of A&B causes the probability of B to rise. (Unlike in the Linda example, where people claim explicitly that P(A&B) > P(B).)

Unless by "subjective Bayesian" you mean that you insist on treating people as look-up tables of probabilities. I think that point of view has much worse problems; e.g., the failure of the look-ups to commute is the problem here, not the conjunction fallacy.
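To make that order-dependence concrete, here is a toy model of such a look-up process; the agent, the scenarios, and the numbers are all my own illustrative assumptions, not anything from the thread:

```python
# A bounded "look-up" agent whose elicited probabilities depend on the order
# of the questions, so the look-ups fail to commute.

# Disjoint scenarios that could lead to B: P(scenario) and P(B | scenario).
SCENARIOS = {"A": (0.20, 0.60), "C": (0.05, 0.50), "D": (0.02, 0.40)}

class BoundedAgent:
    def __init__(self):
        # The agent starts out having thought only of scenario "C".
        self.considered = {"C"}

    def p_B(self):
        # Estimate P(B) by summing over the scenarios considered so far.
        return sum(p * pb for name, (p, pb) in SCENARIOS.items()
                   if name in self.considered)

    def p_B_and(self, scenario):
        # Being asked about "B and scenario" brings that scenario to mind.
        self.considered.add(scenario)
        p, pb = SCENARIOS[scenario]
        return p * pb

# Order 1: ask for P(B) first, then P(B and A).
a1 = BoundedAgent()
print(a1.p_B(), a1.p_B_and("A"))   # 0.025, then 0.12: the conjunction looks bigger

# Order 2: ask about P(B and A) first, then P(B).
a2 = BoundedAgent()
print(a2.p_B_and("A"), a2.p_B())   # 0.12, then 0.145: B is now (correctly) bigger
```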


"In fact I think that a commenter on an earlier post got it pretty much right when implying that we are implicitly calculating the probability of the story we hear given the claim rather than the probability of the claim given the story, and that this is why we get it wrong."

In my 9/20/07 11:47 P.M. post on Burdensome details, I claimed that instead of calculating P(A and B), where B is supporting evidence of how A could happen, they are mistakenly calculating P(A given B) without downweighting the calcualtion correctly by the probability of B.

Through rearranging the definition of conditional probability, we find thatP(A and B)= P(A given B)*P(B).

It could be really easy to either forget, or to not downweight the calculation enough when P(A given B) is really high (if B happens, A is extremely likely to occur). I believe this explains Eliezer's claim"Adding detail can make a scenario SOUND MORE PLAUSIBLE, even though the event necessarily BECOMES LESS PROBABLE" (from Burdensome Details). Essentially, I believe it sounds more plausible because you are making the wrong calculation and/or not downweighting correctly.

Expand full comment

"In fact I think that a commenter on an earlier post got it pretty much right when implying that we are implicitly calculating the probability of the story we hear given the claim rather than the probability of the claim given the story, and that this is why we get it wrong."

Actually, P(story given claim) = P(claim given story). In my 9/20/07 11:47 P.M. post on Burdensome Details, I claimed that instead of calculating P(A and B), where B is supporting evidence of how A could happen, people are mistakenly calculating P(A given B) without downweighting the calculation correctly by the probability of B.

Rearranging the definition of conditional probability, we find that P(A and B) = P(A given B) * P(B).

It is easy either to forget that factor or not to downweight by it enough when P(A given B) is very high (if B happens, A is extremely likely to occur). I believe this explains Eliezer's claim that "Adding detail can make a scenario SOUND MORE PLAUSIBLE, even though the event necessarily BECOMES LESS PROBABLE" (from Burdensome Details). Essentially, the scenario sounds more plausible because you are making the wrong calculation and/or not downweighting correctly.
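A minimal sketch of that downweighting point, with made-up numbers (both probabilities below are my own assumptions, purely for illustration):

```python
# Toy illustration: a vivid story B can make A "sound" likely because
# P(A given B) is high, but the conjunction must be downweighted by P(B).

# Hypothetical numbers (assumptions for illustration only).
p_B = 0.01          # the detailed story itself is quite unlikely
p_A_given_B = 0.95  # but if the story were true, A would almost surely follow

# Definition of conditional probability, rearranged: P(A and B) = P(A|B)P(B).
p_A_and_B = p_A_given_B * p_B

print(f"P(A given B) = {p_A_given_B:.2f}")  # what the scenario 'sounds' like
print(f"P(A and B)   = {p_A_and_B:.4f}")    # what is actually being asked for

# Forgetting the factor P(B), or not downweighting by it enough, is exactly
# the mistake described in the comment above.
```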


I don't think that we know how to think about math probabilistically with any confidence. At the very least, assigning probabilities to theories of probability, or to the axioms of a probability theory, seems incoherent. For the other examples, simple overconfidence is clearly involved in both the cat's-eyes case and the Russia and Poland case. In both cases, very casual consideration suggested no obvious way in which some statement could be true, but empirically such casual examination of a question does not justify placing high probability on one's conclusions, as such conclusions are very frequently wrong.

The cat's vision example is also, to a substantial degree, one of pure logic and definitions, as opposed to a factual issue. If vision is defined in such and such a manner and total darkness is defined in such and such a manner, then it follows logically, i.e. with p = 100%, that cats and humans are equally blind in total darkness, but we have learned nothing about the actual world by such manipulation of logical tokens unless we started with incoherent beliefs. (Is part of your point that as boundedly rational beings we *do* start with incoherent beliefs? This is true, but such beliefs are fallacies, and we are biased if we fail to take this into account when assigning probabilities.)


In Toby's post his two claims are:
1: Cats' vision in complete darkness <= human vision ICD
2: All vision ICD = 0
It cannot be the case that 2 is true but 1 is not. That makes it different from other conjunction fallacy examples.
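Spelling out that relationship in symbols (my own formalization, not from the comment itself): since claim 2 entails claim 1, the conjunction of the two claims is just claim 2, so any coherent probability assignment satisfies

$$(2 \Rightarrow 1) \;\Longrightarrow\; P(1 \wedge 2) = P(2) \le P(1).$$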


In a previous conversation about the conjunction fallacy, I used Solomonoff induction as the background to show that if a bounded rationalist is presented with proof that a short computation halts and has a particular output, this may cause their probability estimate of that output to go up; thus being presented with the conjunction of the output and the proof may result in a higher probability being assigned than being presented with the output only.

"Thus," I said, "bounded rationalists may not be able to eliminate the conjunction fallacy."

But it's still a fallacy. You can't really have P(A&B) > P(B), said the subjective Bayesian.

Incidentally, I assigned very high probability to the first proposition because of the specification of complete darkness, and then much lower probability to the second proposition because there are so many animals, and a universal generalization over them has so many more chances to be wrong. Think of all the burdensome extensional details implied by such a sweeping generalization! What about fireflies? What about bats?
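Going back to the bounded-update setup in the first paragraph of this comment, here is a minimal toy sketch of that idea; the choice of a Collatz step count as the "short computation", and the uniform prior, are my own assumptions for illustration:

```python
# A bounded reasoner's probability for a computation's output can rise
# when the computation is actually exhibited to it.

def collatz_steps(n: int) -> int:
    """Number of steps for n to reach 1 under the Collatz map."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Before seeing any derivation, the bounded reasoner spreads probability
# over a range of plausible answers for collatz_steps(27).
candidates = range(200)
prior = {k: 1 / len(candidates) for k in candidates}
print(f"prior P(output = 111) = {prior[111]:.4f}")   # 0.0050

# "Presenting the proof" = actually running the short computation.
observed = collatz_steps(27)                          # 111

# Afterwards, essentially all probability mass sits on the observed output.
posterior = {k: (1.0 if k == observed else 0.0) for k in candidates}
print(f"posterior P(output = {observed}) = {posterior[observed]:.4f}")

# So the conjunction "the output is 111, and here is the computation showing
# it" can be judged more probable than the bare "the output is 111" was
# judged beforehand: an artifact of bounded reasoning, not of the event
# itself becoming more probable.
```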


Toby, whoops, I was writing while you were posting, so my first-paragraph concerns were pretty much answered in your 10:39 am comment.


I'm unconvinced that the OP has demonstrated an exception to the conjunction fallacy. "[unlikely-sounding-mathematical-claim] and [lemma1] and [lemma2] and [lemma3]" seems to remain less likely to be true than "[unlikely-sounding-mathematical-claim]". Whether we should revise the probability of "[unlikely-sounding-mathematical-claim]" upward upon being told "[lemma1] and [lemma2] and [lemma3]" is an empirical question; it's an interesting, messy, complicated situation that I doubt gives way to a simple heuristic. But I like that the OP has modeled something that's probably closer to how we actually think and build knowledge as boundedly rational agents.

I think this blog could do a lot more to explore bounded rationality (bounded by neuroanatomy, culture, language, lifespan, length of human existence, climate, entropy, etc.) and how it affects historic and future prospects (for example, for solving personal persistence and existential challenges).


Wei, you are right that in a bounded rationality framework we can't always know how long to spend improving our cognitive abilities (via overcoming bias or other methods). However, we can still justify some approximations more than others. I agree with Robin and Eliezer that much more effort needs to be put into this in general: research on the biases, and some form of education that makes most people aware of them (perhaps crystallizable into some handy proverbs?). However, whether people in particular positions should be spending more time on overcoming bias is a question worth thinking about. Oh, and I think that experimentation can be a fruitful way to make progress; this weblog is something of an experiment by Robin and Eliezer in fostering research and educating the public.

Robin, the conjunction fallacy is not a logical fallacy. You cannot state it in pure logic. You need axioms of rationality or probability to state it (and prove it is fallacious). It is thus a fallacy of probability theory or of (unbounded) rationality. It is not a fallacy of bounded rationality for the reasons I outlined. However, I do agree that my particular explanation doesn't apply to most cases and that we are often biased when we find a stronger claim more likely than we would have found one which was strictly weaker.

In fact I think that a commenter on an earlier post got it pretty much right when implying that we are implicitly calculating the probability of the story we hear given the claim rather than the probability of the claim given the story, and that this is why we get it wrong. Unlike some commenters, I'm not claiming that this is a reasonable misinterpretation of the question; I think people often do this in error. After all, people have infamous trouble telling the difference between "all Xs are Ys" and "all Ys are Xs", and this is a very similar mistake.

Anders, I agree with your comments on the relevance of this to predictions involving possible pathways.


I think this is very relevant to claims about the future. A bare claim like "enhancing medicine will be legal and popular in most western countries by 2020" does not sound as convincing as the claim plus a sketch of how we might get there (e.g. patient empowerment, the rise of preventative medicine, medical tourism, etc.). But unlike the lemmas above, these additions are just possible pathways; it might turn out that drug liberalization will be far more important than patient empowerment for making it legal to buy over-the-counter stimulants. Good arguments of this type show likely driving forces the listener might not have thought of; weak arguments of this type just pour on possibilities. But even showing that there are a lot of possible paths to a previously unlikely future is enough to increase its posterior probability. The problem, of course, is that we will easily get taken in by details and stories and overestimate the power of supporting lemmas when they are phrased as "future histories".


It is not clear to me that you are in fact denying that the conjunction fallacy is fallacious. You seem to be describing how it might be an understandable outcome of a reasonable mind. Surely one can claim something is a "bias" without claiming that infinite effort should be put into eliminating it in all circumstances.


With bounded rationality, how can we know whether any piece of cognitive advice, whether aimed at "overcoming bias", or anything else, actually has a net benefit? For example, perhaps paying more conscious attention to our biases actually makes us worse off in the long run because it takes mental resources away from other activities that have better cost benefit ratios.

The situation seems analogous to the one in medicine. In both cases, we don't have a sufficiently good mathematical model of the system that we're trying to influence to predict the overall effects of an intervention based on theory alone. In medicine we already know quite a bit about anatomy, physiology, biochemistry, etc., but still the complexity of the human body prevents us from predicting the effects of a drug through mathematical modeling and simulation alone, so we rely instead on randomized clinical trials. Perhaps we should do the same for cognitive advice?
