Gotchas Are Not Biases

As a kid I had a trick nickel from Disneyland’s magic shop – you were supposed to ask someone to look at your nickel, then push the backside to squirt them.  Now we might wonder about someone who fell for this trick more than once, but surely it doesn’t make sense to call someone "biased" who fell for it once.  Even if you tried the trick nickel on a hundred people, and showed that over ninety percent of them got a wet eye, you wouldn’t have shown people are biased about wet nickels. 

I mention this because I get a similar uneasy feeling about many of the studies people use to show that people are "biased."  For example, Eliezer had a post Aug. 27 about people concluding too soon that they had guessed the rule that governs a sequence, and a post Sept. 7 about people whose fast estimate of a product of eight numbers was too low.   In his 2006 book Infotopia, Cass Sunstein gives similar examples, such as where groups are said to put too much weight on info held by many group members, relative to info held by just one member. 

In all these cases you take people uncertain about which particular case they face, you choose a particular case for them to deal with, and then you show that their response is not perfect for this case.  Like with the trick nickel.  And of course while it is easy to imagine how their response might have been better, what you don’t see is how their response would deal with all the other cases they thought were possible.

I much prefer the standard in economics experiments, which is that you must warn subjects about the distribution of cases they will face, and then you must report on the results when you actually sample from that distribution of cases.  If your subject’s responses still seem inadequate, you have a much stronger case that you have found a bias.   
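As a toy illustration of the difference, consider the product-of-eight-numbers example. The heuristic below is my own illustrative stand-in, not a model of what real subjects do; the point is only to contrast judging it on one hand-picked case versus on a sample from an announced distribution:

```python
import random
import math

def anchor_and_adjust(seq, k=3, adjustment=10):
    """Toy heuristic (illustrative only): multiply the first k numbers,
    then apply a crude fixed upward adjustment instead of finishing
    the computation."""
    return math.prod(seq[:k]) * adjustment

true_value = math.prod(range(1, 9))  # 8! = 40320

# The "gotcha": one hand-picked ordering that makes the heuristic look
# as bad as possible (ascending order gives the smallest anchor).
gotcha_estimate = anchor_and_adjust([1, 2, 3, 4, 5, 6, 7, 8])

# The economics-experiment standard: announce the distribution (here,
# uniformly random orderings of 1..8) and actually sample from it.
rng = random.Random(0)
estimates = [anchor_and_adjust(rng.sample(range(1, 9), 8))
             for _ in range(1000)]
mean_estimate = sum(estimates) / len(estimates)

# The heuristic still underestimates on average, so here the bias claim
# would survive sampling -- but the hand-picked case overstates its size.
```

If the heuristic's estimates are still systematically off when averaged over the announced distribution, you have the stronger case for a bias; the single chosen case alone shows only that a case exists where the response is imperfect.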

Added: More generally, experimenters should show that their environments are typical of real ones, ones subjects expected, or ones subjects should have expected.

  • Bruce Britton

    Robin, could you give an example of what a standard economics experiment would look like when it is testing for a bias? I’m having difficulty seeing the difference between the experiments Eliezer described and those you have in mind. For example, for the two biases Eliezer posted on, what would the design of a standard economics experiment be?

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Bruce, in this experiment we tested whether Manipulators biased the price of prediction markets. To test whether people guess a sequence rule too fast, tell them the distribution of rules they will face and then repeatedly draw from that distribution, each time measuring how long they took before they thought they had found the rule.

  • http://www.spaceandgames.com Peter de Blanc

    Robin, I think the point of Eli’s post was not that people guess too quickly, but that they only look for positive examples. In the experiment described, it would have been impossible for the subjects to disprove their hypotheses if they had continued to construct positive examples, even if they had continued for hours.

    Have you ever played a game of Zendo? Positive bias is very, very apparent.

  • michael vassar

    Robin: In the examples discussed there are standard techniques that would have avoided the errors in question for the answers used and for all other possible answers. The fact that people didn’t use these techniques doesn’t particularly speak badly of them as humans, any more than the fact that people aren’t born knowing how to multiply, but both imply a sort of logical myopia that does speak badly of them as examples of minds in general. Since they don’t simply respond with “I don’t know”, but instead use some other incorrect techniques to try to answer the questions they are given, and since those incorrect techniques predictably give a result that is wrong in a particular fashion, they are biased to produce wrong answers of that type. If they were unbiased, you could use the agreement theorem with them, as you could with the nickel, and reach a correct answer. I’m fairly confident that you can’t actually do that. Failure on the Monty Hall problem or on the Wason Selection task is another good example of bias of this sort.

  • J Thomas

    Robin Hanson, thank you, that’s an important point.

    Of course you can never know whether your guess about a sequence’s rule is correct unless the originator tells you.

    01
    0101
    010101
    01010101
    0101010101

    Is the sequence really 01 repeated?

    010101010111010101010111010101010111

    Is the sequence 010101010111 repeated? Back when it looked like it was “01” we had 5 repetitions and now we have only 3.

    In real life we don’t know the distributions and can’t tell them to the decision makers. So, if you suspect another nation might be about to make a sneak attack on yours, you can prevent it by making a sneak attack on them first. It’s the only way to be sure. But this is the wrong thing to do if they actually were not going to attack you. How do you tell? The easy response is to say “I don’t know.” But you still have to choose, based on whatever you have available. How do you correctly weight the data? Some of it is probably disinformation, but you don’t know which part is wrong.

    In the examples discussed there are standard techniques that would have avoided the errors in question for the answers used and for all other possible answers.

    Neat! Are there standard techniques that work in the real world? Are there standard techniques that work in politics?

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    J, people don’t always know the distribution, but they should always expect some distribution, and we can evaluate their behavior relative to the distribution they expect.

  • a decision science graduate student

    This is a classic objection to Kahneman and Tversky’s experiments (in fact I think this is one that Gerd Gigerenzer is fond of)–one with a classic reply. You are missing the point of the multiplication experiment if you think that it is all about showing individuals are biased. The point was instead to demonstrate the reliance on a particular heuristic (in this case I believe it was anchoring and adjustment). Demonstrating that people have biases in a particular domain is useful because it sheds light on the processes people are actually using, not because it offers a “gotcha” moment for psychologists to call people biased.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Adsgs, heuristics are not biases. I agree you can show that particular heuristics are being used, but to show that they also produce biases you need to look at a larger context.

  • J Thomas

    Robin, given enough trials for each individual, we can infer what distribution each individual expects. That’s better evidence about their distribution than what they say it is. They might not be willing to admit it, or they might not consciously know. But how they actually behave is usually a reliable estimator.

    But what’s it all about? Isn’t the point to find better ways to make realistic distributions? If the distribution you expect is unrealistic, why would it matter whether you follow it faithfully or not?

  • Constant

    I’m so happy about this blog entry that I’m going to write a comment thanking you for making that necessary and important observation even though I have nothing to add.

  • http://entitledtoanopinion.wordpress.com/ TGGP

    Thanks Peter de Blanc for the link on Zendo. The parts on Zen in the otherwise excellent “Gödel, Escher, Bach” kind of annoyed me (what is a positivist to think of “Buddha-nature”?), but now I see why he did that. As a kid I and my family had similar games that went “I’m going to the beach and I’m bringing x but not y” or “Willy Nilly likes p but he doesn’t like q”.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    I can’t back you on this one. Real life doesn’t have convenient warning labels. Not every heuristic is a bias, but that’s a question of what happens to you in your genuine real actual environment – the 21st century, not a hunter-gatherer tribe. It’s certainly not a question of what happens when you’re warned of the distribution.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Eliezer, the point is that you can’t just pick any “gotcha” environment and claim the results you get there are relevant for the “genuine real actual environment.” You need to show that the environments you choose are typical of the real environments, of the environments they did expect, or the environments they should have expected.

  • nick

    RH: “much prefer the standard in economics experiments, which is that you must warn subjects about the distribution of cases they will face,”

    Even this is quite error-prone. People tend to act on habit, especially in social situations, and a short lesson in special game rules usually doesn’t teach them to break those habits. They thus often play as if the rules were those of the closest analogous situations they have experienced in real life, rather than optimizing their play according to the abstract rules set up by the researchers.

    An example of this is when players are anonymous, but many of the players act as if they were not anonymous, since the most analogous real-life experiences they’ve had to the game were not anonymous.

    Then there are the experiments where the players are not anonymous, yet the economists expect them to play as if only their in-game reputation, rather than their reputation with their fellow players when the experiment is over, is at stake. Here again it is the economists rather than the experimental subjects who are making the errors.

  • nick

    MV: “Since they don’t simply respond with “I don’t know”, but instead use some other incorrect techniques to try to answer the questions they are given…”

    This again is the evaluator’s bias, not the subject’s. In real-life analogs of abstract experiments where this “bias” shows up, it is important to make quick decisions even if the decisions are risky or possibly wrong, either because the real-life analog is inherently probabilistic or indeterminate, or because the solution is not valuable enough to merit the cognitive investment needed to arrive at a precisely correct answer, or a combination of these factors.

  • http://pdf23ds.net pdf23ds

    it is important to make quick decisions even if the decisions are risky or possibly wrong

    This may be, but it doesn’t mean the heuristics used to make those quick decisions don’t cause bias, just that the bias caused isn’t terribly maladaptive.

  • nick

    pdf23ds: “This may be, but it doesn’t mean the heuristics used to make those quick decisions don’t cause bias, just that the bias caused isn’t terribly maladaptive.”

    Using your definition of “bias”, I’m making a much stronger claim: that trying to eliminate bias in these situations is positively maladaptive. (To clarify the definition I’m imputing to you: by your definition Deep Blue is more “biased” than a machine that makes perfect chess moves but takes several centuries to do so).

    But the problem goes deeper. Researchers usually present a highly oversimplified analog of a real-life situation, to which subjects respond with their habits based on the far more complex real-life situation. Although the subject’s heuristics, crucial for efficiently solving the real-world problem, create “bias” when playing the abstract game, a far greater bias would occur if the researcher played the real-world game according to the rules optimal for the abstract game. Yet this is often exactly what the researchers conclude that we should do — substitute their strategies, optimized for their abstract games, for the “biased” strategies of their subjects.

  • http://notsneaky.blogspot.com notsneaky

    Isn’t this just pretty much what Savage said to Allais?

  • David J. Balan

    I think the real lesson here is that no one should ever go to Disneyland.