In statistics it is relatively easy to reason about expected values. And the statistics-based literature on the rationality of disagreement that I have worked on has interpreted typical human opinions (about matters of fact as opposed to value) as expected values (which include probabilities). But I wonder: is this a big mistake?
I like the Easter egg hunt example. A good strategy to get someone in the group to find the egg is for each person to randomly sample the probability distribution and search at that spot. However, a selfish individual who only wants to find the egg herself would search at the peak of the probability distribution. If everyone follows that argmax rule, then the group outcome is suboptimal.
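A quick Monte Carlo sketch makes this concrete (the five-spot distribution and the number of searchers are made-up illustration values): under argmax, the group only ever finds eggs hidden at the single most likely spot, while probability-matched sampling spreads the group out.

```python
import random

# Hypothetical setup: 5 possible hiding spots, one egg, known distribution.
p = [0.4, 0.3, 0.15, 0.1, 0.05]
spots = list(range(len(p)))
searchers = 4
trials = 20000
rng = random.Random(0)

def group_finds(strategy):
    """Fraction of trials in which at least one searcher picks the egg's spot."""
    hits = 0
    for _ in range(trials):
        egg = rng.choices(spots, weights=p)[0]
        if strategy == "argmax":
            picks = [0] * searchers  # everyone searches the single modal spot
        else:
            picks = rng.choices(spots, weights=p, k=searchers)  # each samples p
        hits += egg in picks
    return hits / trials

print(group_finds("argmax"))  # ≈ 0.40: eggs outside spot 0 are never found
print(group_finds("sample"))  # noticeably higher: the group covers more spots
```

With these numbers the sampling group finds the egg roughly 69% of the time versus 40% for the argmax group, even though argmax is the better rule for any single selfish searcher acting alone.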
Say that people can't hold an entire probability distribution as a belief, but rather can only hold a point estimate. If people have to use these argmax estimates then we're all in trouble -- people will gravitate towards local maxima instead of taking into account all the possibilities.
You can look for an easter egg in three typical ways (assuming you can't hang back and wait for the others to do so):
1) You individually figure out an algorithm to determine where to look.
2) You choose at random, using the probability distribution.
3) You split the search domain quasi-equally, and each go over your own bit.
Of the three, 3) is the most equitable (everyone has roughly equal chances of finding the egg once the search starts). 2) is quite unequal (choosing zones at random means a lot of overlap, so the few out on their own have a much better chance of finding the egg). 1) is the most interesting; at least initially, it will probably be even less equal than 2).
However, if we iterate the search multiple times, and each searcher changes their algorithm to improve it each time, then we get an interesting situation. We'd probably end up with something in between 2) and 3), born of the twin urges to differentiate yourself from the pack and to search the most likely areas.
These algorithms then become biases, useful biases. They're useful because they are different from each other, so they cover the field more effectively than randomness would. They would be useful even if most of the searchers were searching randomly, as long as a small group isn't.
They would look just like true biases: use the algorithm to compute a probability distribution for where to search. If one took this new probability distribution to be "true" one would search better than if one took the genuine one to be "true".
This may be an argument in favour of "diversity" - ensuring that different areas are searched outweighs the need to have everyone as accurate as possible.
When does this become harmful? When the global probability estimate (summing up all the individual ones) no longer resembles the true one. This goes back to the whole issue of the wisdom of crowds.
And thinking about it, there may be such a factor afoot in the football example. In everyday life, if you are betting on the games, then you have the incentive to be good, but different (as the odds are set by the betting levels). Maybe this is one factor explaining the overall wisdom of the crowd?
I'm not sure it's a tight distinction. Most games will have elements of both -- but I guess the sort of thing I'm thinking of is distinguishing between, say, getting better models of climate prediction on the one hand (advancing knowledge), and making policy decisions based on what we currently know about climate change on the other (which might also advance knowledge, but only indirectly). Perhaps part of the problem is that we're necessarily trying to play both games at once.
"But that is a relatively easy part of beating the market. You can't beat the market unless you also overcome your overconfidence bias enough to avoid disagreeing with the market about the very large fraction of data that the market analyzes as well as or better than you do. Overcoming your overconfidence bias is usually the hardest part of investing. People who do poor jobs of that underperform those who invest randomly."
This makes sense, but I don't see how it changes the nature of the payoffs. All you're saying, if I understand you correctly, is that the market doesn't over- or underbet nearly as often as most investors think it does (agreed, from what speculation and study I have done), and that there are a lot of plausible-sounding analyses of market inefficiencies that turn out to be dead wrong (also agreed). Most financial markets are very efficient, and most investors do not succeed in beating them.
But on those occasions when the market actually under or overbets, there's a profit opportunity. And in zero-sum markets (like tradesports), exploiting those opportunities is the only way you can make any money at all.
If markets were not like the easter egg hunt, then it would be profitable to buy any company you expected to perform well, and short any company you expected to perform poorly. But the stock price represents aggregate information based not just on company performance expectations, but on market expectations. So a stock performs well only if the market has not overbought it.
Tradesports provides the more obvious demonstration. The Colts were trading at 68-32 to win on Tradesports just before kickoff. If you thought their chances were 52%, you would bet the Bears even though you expected the Colts to win. If your estimate is correct, this bet is good and earns money (in EV), even though you are betting against the team you think will win. This is analogous to searching an undersearched area in the easter egg hunt, rather than the area with the highest probability.
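The arithmetic behind that bet is easy to check. A minimal sketch, assuming a standard 100-point binary contract (the specific payout convention is an assumption, not stated in the comment):

```python
# Hypothetical contract: pays 100 points if the Bears win, priced at 32
# (the market's implied 32% chance of a Bears win).
price_bears = 32   # cost of one "Bears win" contract
payout = 100       # received if the Bears do win
p_bears = 0.48     # your estimate: Colts 52%, so Bears 48%

ev = p_bears * payout - price_bears  # expected profit per contract
print(ev)  # 16.0 points: positive EV despite expecting the Colts to win
```

The bet is profitable in expectation whenever your Bears probability exceeds the 0.32 implied by the price, regardless of which team you think is more likely to win outright.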
Of the two questions at the end of this post, I think the second is much more interesting. What sort of distributions of rewards do we want for the goals we have?
Here's an intuition -- I have nothing to back it up, but it might be food for some thought. We really have two goals: (a) the discovery of truth, (b) the dissemination of truth (and its consensus-based use in solving problems). Different reward distributions might be more appropriate for those goals: if we want to discover truth, we might be better off with rewards and sanctions like the easter egg hunt to encourage a broad spread of opinion over the space of possible truths (cf. chapter 2 of Mill, On Liberty again). If we want to come to a consensus on already-discovered truth, we might be better off with a set of rewards and sanctions along the disagreement hypothesis lines.
Likewise, if we understand truth as in some sense its own reward, then we might want to structure our incentives such that where there are multiple truths, there is a broader spread of searchers -- easter egg hunting makes a particularly great metaphor here: if there is a broad range of "true" views of the good life, then perhaps we ought to have incentives that encourage people to disperse a little more broadly (cf., yet again, Mill -- this time chapter 3 of On Liberty) across that field of easter eggs. (Both because they might find new eggs, and because they'll reduce the competition -- i.e. for resources necessary to live out a life-choice -- for the already discovered eggs.)
This comment may be going a little off the rails with the metaphor, but I think some of these considerations are worth thinking about.
As someone who makes a living speculating in stocks, I don't agree that financial markets are more like easter egg hunts. It is true that in order to beat the market you need to engage in a strategy that includes some of the easter egg hunt features (i.e. you need to pay attention to some information that others are not paying adequate attention to). But that is a relatively easy part of beating the market. You can't beat the market unless you also overcome your overconfidence bias enough to avoid disagreeing with the market about the very large fraction of data that the market analyzes as well as or better than you do. Overcoming your overconfidence bias is usually the hardest part of investing. People who do poor jobs of that underperform those who invest randomly.
pdf, when an economist asks for the payoff of a game, he doesn't just want a phrase describing it, he wants a function mapping choices to payoffs.
David, sure real life is complicated, but social scientists progress by charging ahead and trying to find understandable parts of it to model.
Conchis, your hypothesis would be clearer if you offered a concrete alternative game that describes "advance knowledge."
The striking thing about the cooperative easter egg hunt analogy seems to me to be that all that matters in the end is that one person is right (and that if/when one person is right, that will quickly become obvious). How many real-world truth-seeking situations are like this? If we distinguish between games where the point is to advance knowledge as opposed to games where the point is to make a decision based on current (fixed) knowledge, it strikes me that distributed effort is likely to be more valuable in the former case (e.g. allocation of R&D drug expenditure, scientific research effort more generally) than the latter.
In this particular case, there's an advantage to being in the minority: to maximise your chances, you want the zone you search to have the fewest other people searching it (in proportion to your initial probability estimate, of course).
There is a chapter going into this in the popularisation book Critical Mass; I'll see if I can dig up something relevant to this situation.
Real-life payoffs are so much more complicated than theoretical ones. Even with monetary payoffs, people don't see those in as linear a way as mathematics does. When I bought lottery tickets I knew the odds were 18 million to 1 against me and the payoff was something less than $18 million, but $1 is meaningless to me, while anything above $1 million is indescribably more than that, so my being cheated some by administrative costs, money for the schools, and taxes didn't deter me. At the same time, I knew that the even greater reason for my buying one ticket was because I could feel hope for a few days. It didn't matter how much of a longshot that hope was. It was something. I notice I haven't bought any tickets in several years. I'm more financially secure now. I don't value the hope as I did.
Similarly there are issues about the payoff of conformity or contrarianism for different people that may vary in the same individual over time. There is the issue of what sort of existential joy anyone gets from a specific behavior related to forming an opinion or acting on it. I don't think it's a surprise that such things are not well behaved statistically.
Robin, maybe I don't understand exactly what you mean. The payoffs of the strategies I mention are greater status in the community the arguments are made in, which includes things like future deference to your opinion, or willingness of others to more fully engage your opinion when they see it as wrongheaded or silly (as opposed to just ignoring it). The choices are "which position do I stake out in this discussion?", and "who do I side with?". I'm not sure what you mean by "info".
pdf, by asking for the game, I mean to ask for the choices, info, and payoffs, rather than for commonly observed strategies.
zzz, search can be very different when there is more than one searcher competing.
"Imagine an Easter egg hunt where there was only one egg. Even if we all had the same abilities and all agreed on a probability distribution over where the egg might be, we should still search in different parts of the yard. We should spread ourselves out, with the number of people in each region being roughly proportional to the chance of finding the egg in that region."
In Bayesian search theory, you're supposed to start at the point of highest probability and scan over progressively lower-probability areas, while continuously revising your estimate. Distributed search is more complicated; see e.g. Garcia et al., "Distributed On-line Bayesian Search".
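A rough sketch of that revision step for a single searcher (the prior and the detection probability `q` are made-up numbers, and this is the textbook update, not the Garcia et al. distributed algorithm): search the most probable cell, and on failure shift belief toward the unsearched cells via Bayes' rule.

```python
# Prior over four cells (hypothetical numbers) and the chance of
# spotting the egg if you search the cell it is actually in.
p = [0.4, 0.3, 0.2, 0.1]
q = 0.8

def search_once(p):
    """Search the highest-probability cell; return it and the posterior after a miss."""
    i = max(range(len(p)), key=lambda j: p[j])  # cell to search
    miss = 1 - p[i] * q                         # total probability of failing
    post = [pj / miss for pj in p]              # Bayes update for unsearched cells
    post[i] = p[i] * (1 - q) / miss             # searched cell loses mass
    return i, post

i, p2 = search_once(p)
print(i, [round(x, 3) for x in p2])  # → 0 [0.118, 0.441, 0.294, 0.147]
```

After one failed search of cell 0, cell 1 becomes the new most probable spot, which is exactly the "scan progressively lower-probability areas while revising" behavior described above.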
Even in statistics, when rolling a fair die, for instance, the expected value is 3.5 but that's completely unrealistic (in fact it will never happen!).
Well, one opinion game is the "earn respect by being more extreme than your peers" strategy. The sort of thing that drives Rush Limbaugh to ever-more-extreme rhetoric in order to get more outrage and thus listenership. There's the "a pox on both their houses" strategy, which is like an ultra-cynical moral high ground. There's the "middle path" strategy, which is an attempt to earn status by being the "moderate and reasonable" one, with the payoff if one is proven right. All these things are tendencies that lead people to adopt positions in reaction to social dynamics (and specifically, positions that few or no other participants share) rather than pure truth seeking.
Of course, some of these might be justified in that they can help a group come to a consensus on an issue, and groups can be right more often than individuals.
The thing that makes financial bets apparently irrational is the choice to bet at all. If people were risk-loving, then it would be rational for them to make financial bets, and to make different ones. The Easter egg hunt is rational for the risk-averse only if they do not have to pay an entrance fee equal to the value of the egg divided by the number of players. In ordinary opinion games there might be a signaling penalty for not having an opinion; perhaps a similar signaling incentive encourages financial bets.