When I think of Aumann's agreement theorem, my first reflex is to average. You think A is 80% likely; my initial impression is that it's 60% likely. After you and I talk, maybe we both should think 70%. "Average your starting beliefs," or perhaps "do a weighted average, weighted by expertise," is a common heuristic.
But sometimes, not only is the best combination not the average, it's more extreme than either original belief.
Let's say Jane and James are trying to determine whether a particular coin is fair. They both think there's an 80% chance the coin is fair. They also know that if the coin is unfair, it is the sort that comes up heads 75% of the time.
Jane flips the coin five times, performs a perfect Bayesian update, and concludes there's a 65% chance the coin is unfair. James flips the coin five times, performs a perfect Bayesian update, and concludes there's a 39% chance the coin is unfair. The averaging heuristic would suggest that the correct answer is between 65% and 39%. But a perfect Bayesian, hearing both Jane's and James's estimates – knowing their priors, and deducing what evidence they must have seen – would infer that the coin was 83% likely to be unfair. [Math footnoted.]
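The text doesn't say which flip sequences Jane and James saw, but working backward from their quoted posteriors, five heads (Jane) and four heads out of five (James) are the observations consistent with those numbers. Under that assumption, a short sketch of the Bayesian updates:

```python
# Coin example: prior 20% unfair; an unfair coin lands heads 75% of the
# time, a fair coin 50%.  The flip counts below are inferred from the
# quoted posteriors, not stated in the text.
P_UNFAIR = 0.2
P_HEADS_UNFAIR, P_HEADS_FAIR = 0.75, 0.5

def posterior_unfair(heads, flips, prior_unfair=P_UNFAIR):
    """Bayesian update on the probability that the coin is unfair."""
    tails = flips - heads
    like_unfair = P_HEADS_UNFAIR**heads * (1 - P_HEADS_UNFAIR)**tails
    like_fair = P_HEADS_FAIR**flips
    num = prior_unfair * like_unfair
    return num / (num + (1 - prior_unfair) * like_fair)

print(round(posterior_unfair(5, 5), 2))   # Jane:  0.65
print(round(posterior_unfair(4, 5), 2))   # James: 0.39
print(round(posterior_unfair(9, 10), 2))  # All ten flips pooled: 0.83
```

Pooling all ten flips (nine heads, one tail) gives 83% – more extreme than either individual posterior, just as the text claims.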
Perhaps Jane and James are combining this information in the middle of a crowded tavern, with no pen and paper in sight. Maybe they don't have time or memory enough to tell each other every coin flip they observed. So instead they just tell each other their posterior probabilities – a nice, short summary for a harried rationalist pair. Perhaps this brevity is why we tend to average posterior beliefs.
However, there is an alternative. Jane and James can trade likelihood ratios. Like posterior beliefs, likelihood ratios are a condensed summary; and, unlike posterior beliefs, likelihood ratios from independent observations can simply be multiplied together, so sharing them actually works.
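A sketch of what trading likelihood ratios looks like in the coin example, again assuming Jane saw five heads and James four of five (the counts consistent with their quoted posteriors):

```python
# Trading likelihood ratios instead of posteriors.  Prior: 20% unfair;
# unfair coin lands heads 75% of the time, fair coin 50%.
def likelihood_ratio(heads, flips):
    """P(data | unfair) / P(data | fair) for a given flip sequence."""
    return (0.75**heads * 0.25**(flips - heads)) / 0.5**flips

prior_odds = 0.2 / 0.8                 # odds that the coin is unfair
lr_jane = likelihood_ratio(5, 5)       # ~7.59
lr_james = likelihood_ratio(4, 5)      # ~2.53

# Because the two sets of flips are independent, the combined likelihood
# ratio is just the product -- no need to share the raw flip data.
combined_odds = prior_odds * lr_jane * lr_james
combined_posterior = combined_odds / (1 + combined_odds)
print(round(combined_posterior, 2))    # 0.83
```

Each person reports one number, and multiplying those numbers recovers exactly the 83% a perfect Bayesian would compute from the pooled flips – something no averaging of the 65% and 39% posteriors can do.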