Tag Archives: Bayesian

What Conjunction Fallacy?

Edi Karni:

This paper reports the results of a series of experiments designed to test whether and to what extent individuals succumb to the conjunction fallacy. Using an experimental design of Kahneman and Tversky (1983), it finds that given mild incentives, the proportion of individuals who violate the conjunction principle is significantly lower than that reported by Kahneman and Tversky. Moreover, when subjects are allowed to consult with other subjects, these proportions fall dramatically, particularly when the size of the group rises from two to three. These findings cast serious doubts about the importance and robustness of such violations for the understanding of real-life economic decisions.

Hat tip to Dan Houser.


Rarity Anomalies Remain

Our choices apparently underweight rare events when we learn of them through experienced track records, even though we accurately estimate the frequencies of those events.  We overweight rare events, however, when we are told their probabilities.  Simple explanations of these anomalies are shot down in a recent Psychological Science article:

When making decisions involving risky outcomes on the basis of verbal descriptions of the outcomes and their associated probabilities, people behave as if they overweight small probabilities. In contrast, when the same outcomes are instead experienced in a series of samples, people behave as if they underweight small probabilities. We present two experiments showing that the existing explanations of the underweighting observed in decisions from experience are not sufficient to account for the effect. Underweighting was observed when participants experienced representative samples of events, so it cannot be attributed to undersampling of the small probabilities. In addition, earlier samples predicted decisions just as well as later samples did, so underweighting cannot be attributed to recency weighting. Finally, frequency judgments were accurate, so underweighting cannot be attributed to judgment error. Furthermore, we show that the underweighting of small probabilities is also reflected in the best-fitting parameter values obtained when prospect theory, the dominant model of risky choice, is applied to the data.

Share likelihood ratios, not posterior beliefs

When I think of Aumann's agreement theorem, my first reflex is to average.  You think A is 80% likely; my initial impression is that it's 60% likely.  After you and I talk, maybe we both should think 70%.  "Average your starting beliefs", or perhaps "do a weighted average, weighted by expertise" is a common heuristic.

But sometimes, not only is the best combination not the average, it's more extreme than either original belief.

Let's say Jane and James are trying to determine whether a particular coin is fair.  They both think there's an 80% chance the coin is fair.  They also know that if the coin is unfair, it is the sort that comes up heads 75% of the time.

Jane flips the coin five times, performs a perfect Bayesian update, and concludes there's a 65% chance the coin is unfair.  James flips the coin five times, performs a perfect Bayesian update, and concludes there's a 39% chance the coin is unfair.  The averaging heuristic would suggest that the correct answer is between 65% and 39%.  But a perfect Bayesian, hearing both Jane's and James's estimates – knowing their priors, and deducing what evidence they must have seen – would infer that the coin was 83% likely to be unfair.  [Math footnoted.]

Perhaps Jane and James are combining this information in the middle of a crowded tavern, with no pen and paper in sight.  Maybe they don't have time or memory enough to tell each other all the coins they observed.  So instead they just tell each other their posterior probabilities – a nice, short summary for a harried rationalist pair.  Perhaps this brevity is why we tend to average posterior beliefs.

However, there is an alternative.  Jane and James can trade likelihood ratios.  Like posterior beliefs, likelihood ratios are a condensed summary; and, unlike posterior beliefs, sharing likelihood ratios actually works.
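The coin example above can be sketched in a few lines. This is a minimal illustration; the flip counts (Jane seeing 5 heads in 5 flips, James seeing 4 in 5) are back-solved assumptions that reproduce the quoted 65% and 39% posteriors, not figures stated in the post.

```python
def posterior_unfair(n_heads, n_flips, prior_unfair=0.2, p_heads_unfair=0.75):
    """Posterior probability that the coin is unfair, given the observed flips."""
    lik_fair = 0.5 ** n_flips
    lik_unfair = p_heads_unfair ** n_heads * (1 - p_heads_unfair) ** (n_flips - n_heads)
    odds = (prior_unfair / (1 - prior_unfair)) * (lik_unfair / lik_fair)
    return odds / (1 + odds)

def likelihood_ratio(posterior, prior=0.2):
    """Back out the likelihood ratio implied by a reported posterior."""
    return (posterior / (1 - posterior)) / (prior / (1 - prior))

jane = posterior_unfair(5, 5)   # ~0.655: "65% chance the coin is unfair"
james = posterior_unfair(4, 5)  # ~0.388: "39% chance the coin is unfair"

# Averaging posteriors lands between 39% and 65% -- but multiplying both
# implied likelihood ratios onto the shared prior gives the right answer:
combined_odds = (0.2 / 0.8) * likelihood_ratio(jane) * likelihood_ratio(james)
combined = combined_odds / (1 + combined_odds)  # ~0.83, more extreme than either
```

Because likelihood ratios from independent evidence simply multiply, each person's summary can be combined without anyone reconstructing the raw flip sequences.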

Continue reading "Share likelihood ratios, not posterior beliefs" »


Different meanings of Bayesian statistics

I had a discussion with Christian Robert about the mystical feelings that seem to be sometimes inspired by Bayesian statistics.  The discussion originated with an article by Eliezer, so it seemed appropriate to put the discussion here on Eliezer's blog.  As background, both Christian and I have done a lot of research on Bayesian methods and computation, and we've also written books on the topic, so in some ways we're perhaps too close to the topic to be the best judges of how a newcomer will think about Bayes.

Christian began by describing Eliezer's article about constructing Bayes’ theorem for simple binomial outcomes with two possible causes as "indeed funny and entertaining (at least at the beginning) but, as a mathematician, I [Christian] do not see how these many pages build more intuition than looking at the mere definition of a conditional probability and at the inversion that is the essence of Bayes’ theorem. The author agrees to some level about this . . . there is however a whole crowd on the blogs that seems to see more in Bayes’s theorem than a mere probability inversion . . . a focus that actually confuses—to some extent—the theorem [two-line proof, no problem, Bayes' theorem being indeed tautological] with the construction of prior probabilities or densities [a forever-debatable issue]."
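Christian's point that Bayes' theorem is "a mere probability inversion" takes only a few lines to state. The numbers below are arbitrary illustrations of my own, not from either article:

```python
# Bayes' theorem as probability inversion: recover P(cause | effect)
# from P(effect | cause), a prior, and the law of total probability.
p_cause = 0.3
p_effect_given_cause = 0.8
p_effect_given_other = 0.1

# Total probability of the effect, then the inversion itself:
p_effect = p_effect_given_cause * p_cause + p_effect_given_other * (1 - p_cause)
p_cause_given_effect = p_effect_given_cause * p_cause / p_effect  # ~0.774
```

The two-line proof is uncontroversial; as the quote notes, the forever-debatable part is where the prior comes from.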

I replied that there are several different points of fascination about Bayes:

Continue reading "Different meanings of Bayesian statistics" »


Beliefs Require Reasons, or: Is the Pope Catholic? Should he be?

In the early days of this blog, I would pick fierce arguments with Robin about the no-disagreement hypothesis.  Lately, however, reflection on things like public reason has brought me toward agreement with Robin, or at least moderated my disagreement.  To see why, it’s perhaps useful to take a look at the newspapers:

the pope said the book “explained with great clarity” that “an interreligious dialogue in the strict sense of the word is not possible.” In theological terms, added the pope, “a true dialogue is not possible without putting one’s faith in parentheses.”

What are we to make of a statement like this?

Continue reading "Beliefs Require Reasons, or: Is the Pope Catholic? Should he be?" »


All Hail Info Theory

Hard questions are often hard because different ways to think about them conflict.  When each way seems to have strong support, we are reluctant to choose.  But if we cannot avoid the conflict, choose we must.  For example, last October I wrote:

Our standard ("Bayesian") formal theories of information and probability … are by far the main formal approaches to such issues in physics, economics, computer science, statistics, and philosophy.  … There are, however, a number of claimed exceptions, cases where many people think certain beliefs are justified even though they seem contrary to this standard framework. … I am … tempted to reject all claimed exceptions, but that wouldn’t be fair.  So I’m instead raising the issue and offering a quick survey of claimed exceptions. … The following do not seem to be exceptions: Indexicals … Logical Implications … Here are possible exceptions: Math and Concept Axioms … Basic Moral Claims … Consciousness … The Real World … Real Stuff … [I could have added religious beliefs to this list.]

Actually, in all these cases it is standard info theory (i.e., info is whatever excludes possibilities) alone that seems to conflict with something else – probability theory is irrelevant.  And it seems to me that nothing which seems to conflict with standard info theory is remotely as well established as info theory itself.  So when there is a conflict, info theory must just win.  (More are willing to challenge standard "Bayesian" probability theory – e.g., see Andrew Gelman, Scott Aaronson.)
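A toy rendering of "info is whatever excludes possibilities" may help; this is my own gloss on the standard framework, not code from the post:

```python
# Possible worlds: the outcomes of two coin flips. A state of knowledge is
# the set of worlds still considered possible; receiving information
# intersects that set with the worlds consistent with the signal.
worlds = {"HH", "HT", "TH", "TT"}

def update(knowledge, consistent_with_signal):
    """Receiving info excludes the worlds inconsistent with the signal."""
    return knowledge & consistent_with_signal

# Learn that the first flip came up heads:
after_first = update(worlds, {"HH", "HT"})
# Then learn that the second flip came up tails:
after_both = update(after_first, {"HT", "TT"})  # only "HT" survives
```

On this view, a claimed exception must identify some justified belief change that is not an exclusion of possibilities.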

Continue reading "All Hail Info Theory" »


Chinks In The Bayesian Armor

To judge if beliefs are biased, Eliezer, I, and others here rely heavily on our standard ("Bayesian") formal theories of information and probability.  These are by far the main formal approaches to such issues in physics, economics, computer science, statistics, and philosophy.  They fit well with many, but not all, specific intuitions we have about what are reasonable beliefs.

There are, however, a number of claimed exceptions, cases where many people think certain beliefs are justified even though they seem contrary to this standard framework.  This interferes with our efforts to overcome bias, as it allows people with beliefs contrary to this standard framework to claim their beliefs are yet more exceptions.  I am thus tempted to reject all claimed exceptions, but that wouldn’t be fair.  So I’m instead raising the issue and offering a quick survey of claimed exceptions.  Perhaps future posts can consider each one of these in more detail.

To review, in our standard framework systems out there have many possible states and our minds can have many possible belief states, and interactions between minds and systems allow their states to become correlated.  This correlation lets minds have beliefs about systems that correlate with the states of those systems.  The exact degree of belief appropriate depends on our beliefs about the correlation, and can be expressed with exact but complex mathematical expressions.   
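The framework sketched above can be made concrete in a few lines. This is a toy model of my own, assuming a binary system state and a noisy sensor:

```python
# A system has a binary state; a mind observes it through a noisy channel
# that flips the reading with probability `noise`. Interaction correlates
# the mind's belief state with the system's state.
def belief_after_reading(reading, noise=0.1, prior=0.5):
    """Bayes' rule: belief that the system's state is True, given the reading."""
    p_read_if_true = (1 - noise) if reading else noise
    p_read_if_false = noise if reading else (1 - noise)
    num = p_read_if_true * prior
    return num / (num + p_read_if_false * (1 - prior))

# A True reading shifts belief from 0.5 to 0.9; the exact degree of belief
# depends on our beliefs about the correlation (here, the noise level):
confident = belief_after_reading(True, noise=0.1)  # 0.9
hesitant = belief_after_reading(True, noise=0.4)   # 0.6
```

The point of the exercise: the "exact but complex mathematical expressions" reduce, in simple cases, to tracking how much an observation could have been produced by each candidate state.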

OK, the following do not seem to be exceptions:

Continue reading "Chinks In The Bayesian Armor" »


Are More Complicated Revelations Less Probable?

Consider two possible situations, A and B. In situation A, we come across a person–call him "A"–who makes the following claim: "I was abducted by aliens from the planet Alpha; they had green skin." In situation B, we come across a different person–call him "B"–who tells us, "I was abducted by aliens from the planet Beta; they had blue skin, they liked to play ping-pong, they rode around on unicycles, and their favorite number was 7." In either situation, we are likely to assign low subjective probability to the abduction claim that we hear. But should we assign higher subjective probability to the claim in one situation than in the other?

Mindful of Occam’s razor, and careful to avoid the type of reasoning that leads to the conjunction fallacy, we might agree that A’s claim is, in itself, more probable, because it is less specific. However, we have to condition our probability assessment on the evidence that A or B actually made his claim. While B’s claim is less intrinsically likely, the hypothesis that B’s claim is true has strong explanatory power to account for why B made the specific statements he did. Thus, in the end it may not be so obvious whether we should believe A’s claim more in situation A than we believe B’s claim in situation B.
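The trade-off described above can be put in numbers. All the probabilities below are invented for illustration; the post makes no quantitative claims:

```python
# P(claim true | claim made), by Bayes' rule. A detailed claim has a lower
# prior of being true, but may also be far less likely to be invented
# exactly as stated if false -- so its posterior can come out higher.
def p_true_given_reported(prior_true, p_report_if_true, p_report_if_false):
    num = prior_true * p_report_if_true
    return num / (num + (1 - prior_true) * p_report_if_false)

# A's simple claim: higher prior, but easy to invent.
p_a = p_true_given_reported(prior_true=1e-6, p_report_if_true=0.5,
                            p_report_if_false=1e-4)
# B's detailed claim: lower prior, but much harder to invent exactly.
p_b = p_true_given_reported(prior_true=1e-7, p_report_if_true=0.5,
                            p_report_if_false=1e-6)
# With these (invented) numbers, p_b > p_a despite B's lower prior.
```

Whether the detailed claim really wins depends entirely on how the prior shrinks versus how the probability of fabricating that exact story shrinks, which is why the answer is not obvious.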

Continue reading "Are More Complicated Revelations Less Probable?" »


They’re Telling You They’re Lying!

Robert Aumann, winner of the Nobel Prize in Economics for his contributions to game theory, is vocally opposed to peace gestures that Israel either has made or that people have suggested it should make.  His basic message can be summarized in the following passage:

Continue reading "They’re Telling You They’re Lying!" »


Bayes: radical, liberal, or conservative?

I wrote the following (with Aleks Jakulin) to introduce a special issue on Bayesian statistics of the journal Statistica Sinica (volume 17, 422-426).  I think the article might be of interest even to dabblers in Bayes, as I try to make explicit some of the political or quasi-political attitudes floating around the world of statistical methodology.

Continue reading "Bayes: radical, liberal, or conservative?" »
