
I think the real lesson here is that no one should ever go to Disneyland.


Isn't this pretty much what Savage said to Allais?


pdf23ds: "This may be, but it doesn't mean the heuristics used to make those quick decisions don't cause bias, just that the bias caused isn't terribly maladaptive."

Using your definition of "bias", I'm making a much stronger claim: that trying to eliminate bias in these situations is positively maladaptive. (To clarify the definition I'm imputing to you: by your definition Deep Blue is more "biased" than a machine that makes perfect chess moves but takes several centuries to do so).

But the problem goes deeper. Researchers usually present a highly oversimplified analog of a real-life situation, to which subjects respond with their habits based on the far more complex real-life situation. Although the subject's heuristics, crucial for efficiently solving the real-world problem, create "bias" when playing the abstract game, a far greater bias would occur if the researcher played the real-world game according to the rules optimal for the abstract game. Yet this is often exactly what the researchers conclude that we should do -- substitute their strategies, optimized for their abstract games, for the "biased" strategies of their subjects.


"it is important to make quick decisions even if the decisions are risky or possibly wrong"

This may be, but it doesn't mean the heuristics used to make those quick decisions don't cause bias, just that the bias caused isn't terribly maladaptive.


MV: "Since they don't simply respond with "I don't know", but instead use some other incorrect techniques to try to answer the questions they are given..."

This again is the evaluator's bias, not the subject's. In real-life analogs of abstract experiments where this "bias" shows up, it is important to make quick decisions even if the decisions are risky or possibly wrong, either because the real-life analog is inherently probabilistic or indeterminate, or because the solution is not valuable enough to merit the cognitive investment needed to arrive at a precisely correct answer, or a combination of these factors.


RH: "much prefer the standard in economics experiments, which is that you must warn subjects about the distribution of cases they will face,"

Even this is quite error-prone. People tend to act on habit, especially in social situations, and a short lesson in special game rules usually doesn't teach them to break those habits. They thus often play as if the rules were those of the closest analogous situations they have experienced in real life, rather than optimizing their play according to the abstract rules set up by the researchers.

An example of this is when players are anonymous, but many of them act as if they were not, because the real-life experiences most analogous to the game were not anonymous.

Then there are the experiments where the players are not anonymous, yet the economists expect them to play as if only their in-game reputation, rather than their reputation with their fellow players once the experiment is over, were at stake. Here again it is the economists, rather than the experimental subjects, who are making the errors.


Eliezer, the point is that you can't just pick any "gotcha" environment and claim the results you get there are relevant for the "genuine real actual environment." You need to show that the environments you choose are typical of the real environments, of the environments they did expect, or of the environments they should have expected.


I can't back you on this one. Real life doesn't have convenient warning labels. Not every heuristic is a bias, but that's a question of what happens to you in your genuine real actual environment - the 21st century, not a hunter-gatherer tribe. It's certainly not a question of what happens when you're warned of the distribution.


Thanks, Peter de Blanc, for the link on Zendo. The parts on Zen in the otherwise excellent "Gödel, Escher, Bach" kind of annoyed me (what is a positivist to think of "Buddha-nature"?), but now I see why he did that. As a kid, my family and I played similar games that went "I'm going to the beach and I'm bringing x but not y" or "Willy Nilly likes p but he doesn't like q".


I'm so happy about this blog entry that I'm going to write a comment thanking you for making that necessary and important observation even though I have nothing to add.


Robin, given enough trials for each individual, we can infer what distribution each individual expects. Their behavior is better data about that distribution than what they say it is. They might not be willing to admit it, or they might not consciously know. But how they actually behave is usually a reliable estimator.
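A minimal sketch of what that inference might look like, under the simplifying assumption that a subject's choice frequencies reveal the distribution they act as if they expect (the function name and trial data below are hypothetical, purely for illustration):

```python
from collections import Counter

def revealed_distribution(choices):
    """Estimate the distribution an individual acts as if they expect,
    using the empirical frequencies of their repeated choices as a proxy."""
    counts = Counter(choices)
    total = sum(counts.values())
    return {outcome: n / total for outcome, n in counts.items()}

# Hypothetical trial data: which option a subject picked on each round.
trials = ["A", "A", "B", "A", "A", "B", "A", "A", "A", "B"]
print(revealed_distribution(trials))  # {'A': 0.7, 'B': 0.3}
```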

But what's it all about? Isn't the point to find better ways to make realistic distributions? If the distribution you expect is unrealistic, why would it matter whether you follow it faithfully or not?


Adsgs, heuristics are not biases. I agree you can show that particular heuristics are being used, but to show that they also produce biases you need to look at a larger context.


This is a classic objection to Kahneman and Tversky's experiments (in fact I think this is one that Gerd Gigerenzer is fond of)--one with a classic reply. You are missing the point of the multiplication experiment if you think that it is all about showing individuals are biased. The point was instead to demonstrate the reliance on a particular heuristic (in this case I believe it was anchoring and adjustment). Demonstrating that people have biases in a particular domain is useful because it sheds light on the processes people are actually using, not because it offers a "gotcha" moment for psychologists to call people biased.
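For reference, the experiment asked subjects to estimate 1×2×⋯×8 or 8×7×⋯×1 within a few seconds; if I recall correctly, the median estimates were roughly 512 and 2,250 respectively, against a true product of 40,320. A quick look at the running partial products (my own illustration of the anchoring story, not anything from the paper) shows how different the early anchors are:

```python
from itertools import accumulate
from operator import mul

ascending = [1, 2, 3, 4, 5, 6, 7, 8]
descending = ascending[::-1]

# The running partial products are the plausible "anchors" a subject
# reaches after a few seconds of mental work, before extrapolating.
print(list(accumulate(ascending, mul)))
# [1, 2, 6, 24, 120, 720, 5040, 40320]
print(list(accumulate(descending, mul)))
# [8, 56, 336, 1680, 6720, 20160, 40320, 40320]
```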


J, people don't always know the distribution, but they should always expect some distribution, and we can evaluate their behavior relative to the distribution they expect.


Robin Hanson, thank you, that's an important point.

Of course you can never know whether your guess about a sequence's rule is correct unless the originator tells you.

010101010101010101010101010101

Is the sequence really 01 repeated?

010101010111010101010111010101010111

Is the sequence 010101010111 repeated? Back when it looked like it was "01" we had 5 repetitions and now we have only 3.
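To make the underdetermination concrete, here is a minimal sketch (my own illustration) that checks whether an observed string is consistent with the hypothesis "pattern p repeated forever":

```python
def fits_rule(observed, pattern):
    """Return True if the observed digits are consistent with the
    hypothesis that the sequence is `pattern` repeated forever."""
    reps = len(observed) // len(pattern) + 1
    return observed == (pattern * reps)[:len(observed)]

early = "0101010101"                           # the first ten digits we saw
full = "010101010111010101010111010101010111"  # what we have now

print(fits_rule(early, "01"))           # True  - "01" fit the early data
print(fits_rule(full, "01"))            # False - more data killed that guess
print(fits_rule(full, "010101010111"))  # True  - a longer period still fits
print(fits_rule(full, full))            # True  - as does "the whole string once"
```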

In real life we don't know the distributions and can't tell them to the decision makers. So, if you suspect another nation might be about to make a sneak attack on yours, you can prevent it by making a sneak attack on them first. It's the only way to be sure. But this is the wrong thing to do if they actually were not going to attack you. How do you tell? The easy response is to say "I don't know." But you still have to choose, based on whatever you have available. How do you correctly weight the data? Some of it is probably disinformation, but you don't know which part is wrong.

"In the examples discussed there are standard techniques that would have avoided the errors in question for the answers used and for all other possible answers."

Neat! Are there standard techniques that work in the real world? Are there standard techniques that work in politics?


Robin: In the examples discussed there are standard techniques that would have avoided the errors in question, for the answers used and for all other possible answers. The fact that people didn't use these techniques doesn't particularly speak badly of them as humans, any more than the fact that people aren't born knowing how to multiply does; but both imply a sort of logical myopia that does speak badly of them as examples of minds in general.

Since they don't simply respond with "I don't know", but instead use other, incorrect techniques to try to answer the questions they are given, and since those incorrect techniques predictably give results that are wrong in a particular fashion, they are biased toward producing wrong answers of that type. If they were unbiased, you could use the agreement theorem with them, as you could with the nickel, and reach a correct answer. I'm fairly confident that you can't actually do that. Failure on the Monty Hall problem or on the Wason Selection task is another good example of bias of this sort.
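On the Monty Hall point, a small simulation of the standard setup (the host knows where the car is and always opens a goat door you didn't pick) shows why the unbiased answer is to switch, winning about two-thirds of the time:

```python
import random

def monty_hall(trials=100_000):
    """Simulate Monty Hall and compare the stay vs. switch strategies."""
    stay_wins = switch_wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        # Switching means taking the one remaining closed door.
        switched = next(d for d in range(3) if d != pick and d != opened)
        stay_wins += (pick == car)
        switch_wins += (switched == car)
    print(f"stay:   {stay_wins / trials:.3f}")    # ~0.333
    print(f"switch: {switch_wins / trials:.3f}")  # ~0.667

monty_hall()
```

(When the first pick is the car, the host could open either goat door; choosing one deterministically, as here, doesn't change the stay/switch win rates.)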
