Category Archives: Standard Biases

The linear-scaling error

Say your first car got 10 mpg, and you replaced it with a 20 mpg car.  Now you’re ready to get another car.  How many mpg will your new car need to get, to be as much of an improvement over your last car (gas-wise), as that car was over your first car?

A recent Science article, summarized here, reports on this as an instance of a simple yet subtle bias: When given information, people assume that the effects relevant to them scale linearly with the measurement scale used. In this instance, it’s miles per gallon.

If this were wartime, and you were rationed 10 gallons per week, the measurement of interest to you in evaluating a car's mileage might be the number of different places you could visit once a week with that car.  Then the relevant statistic would be (miles/gallon)², since the area within your weekly range grows with the square of that range.  But since we aren't rationing gas, a better measurement is gallons per mile, which can be translated into dollars and environmental impact per mile.

When people are given figures in miles per gallon, they usually think that the answer to the above question is 30 mpg.  "Sixty percent of participants ordered the pairs according to linear improvement and 1% according to actual improvement. A third strategy, proportional improvement, was used by 10% of participants."  (The proportional strategy says that the answer is 40 mpg.)

People get the right answer when you rephrase the question in units that scale linearly with the effect.  Try this:  Your first car could go 100 miles on 10 gallons of gas.  Your second car could go 100 miles on 5 gallons of gas.  Your third car needs to go 100 miles on… 0 gallons of gas.  So it needs to get infinite mpg, to match the improvement in going from 10 to 20 mpg.
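A quick check of the arithmetic (a minimal Python sketch; the only inputs are the 10 mpg and 20 mpg figures from the example above):

# Fuel needed to drive 100 miles at a given fuel economy (mpg).
def gallons_per_100_miles(mpg):
    return 100 / mpg

first, second = 10, 20  # mpg of the first two cars

saving = gallons_per_100_miles(first) - gallons_per_100_miles(second)
print(saving)  # 5.0 gallons saved per 100 miles (10.0 -> 5.0)

# Matching that improvement means using 5 fewer gallons per 100 miles again:
target = gallons_per_100_miles(second) - saving
print(target)  # 0.0 gallons per 100 miles, i.e. infinite mpg

# The intuitive answers, for comparison:
print(gallons_per_100_miles(30))  # ~3.33 - the "linear" answer saves only ~1.67 gallons
print(gallons_per_100_miles(40))  # 2.5  - the "proportional" answer saves only 2.5 gallons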


The Complete Idiot’s Guide to Ad Hominem

Stephen Bond writes the definitive word on ad hominem in "the ad hominem fallacy fallacy":

In reality, ad hominem is unrelated to sarcasm or personal abuse.  Argumentum ad hominem is the logical fallacy of attempting to undermine a speaker’s argument by attacking the speaker instead of addressing the argument.  The mere presence of a personal attack does not indicate ad hominem: the attack must be used for the purpose of undermining the argument, or otherwise the logical fallacy isn’t there.

[…]

A: "All rodents are mammals, but a weasel isn’t a rodent, so it can’t be a mammal."
B: "You evidently know nothing about logic. This does not logically follow."

B’s argument is still not ad hominem.  B does not imply that A’s sentence does not logically follow because A knows nothing about logic.  B is still addressing the substance of A’s argument…

This is too beautiful, thorough, and precise to not post.  HT to sfk on HN.


Lawful Uncertainty

Previously in series: Lawful Creativity

From Robyn Dawes, Rational Choice in an Uncertain World:

"Many psychological experiments were conducted in the late 1950s and early 1960s in which subjects were asked to predict the outcome of an event that had a random component but yet had base-rate predictability – for example, subjects were asked to predict whether the next card the experiment turned over would be red or blue in a context in which 70% of the cards were blue, but in which the sequence of red and blue cards was totally random.

In such a situation, the strategy that will yield the highest proportion of success is to predict the more common event.  For example, if 70% of the cards are blue, then predicting blue on every trial yields a 70% success rate.

What subjects tended to do instead, however, was match probabilities – that is, predict the more probable event with the relative frequency with which it occurred.  For example, subjects tended to predict 70% of the time that the blue card would occur and 30% of the time that the red card would occur.  Such a strategy yields a 58% success rate, because the subjects are correct 70% of the time when the blue card occurs (which happens with probability .70) and 30% of the time when the red card occurs (which happens with probability .30); .70 * .70 + .30 * .30 = .58.

In fact, subjects predict the more frequent event with a slightly higher probability than that with which it occurs, but do not come close to predicting its occurrence 100% of the time, even when they are paid for the accuracy of their predictions…  For example, subjects who were paid a nickel for each correct prediction over a thousand trials… predicted [the more common event] 76% of the time."

(Dawes cites:  Tversky, A. and Edwards, W.  1966.  Information versus reward in binary choice.  Journal of Experimental Psychology, 71, 680-683.)
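To see how much the matching strategy gives up, here is a minimal simulation sketch; the 70% blue deck and the two strategies follow the quoted setup, and the trial count is arbitrary:

import random

random.seed(0)
P_BLUE = 0.70
TRIALS = 100_000

cards = ['blue' if random.random() < P_BLUE else 'red' for _ in range(TRIALS)]

# Maximizing: always predict the more common colour.
maximize_hits = sum(card == 'blue' for card in cards)

# Probability matching: predict blue 70% of the time and red 30% of the time,
# independently of the actual sequence.
match_hits = sum(
    card == ('blue' if random.random() < P_BLUE else 'red') for card in cards
)

print(maximize_hits / TRIALS)  # ~0.70
print(match_hits / TRIALS)     # ~0.58 = 0.7*0.7 + 0.3*0.3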

Do not think that this experiment is about a minor flaw in gambling strategies.  It compactly illustrates the most important idea in all of rationality.

Continue reading "Lawful Uncertainty" »


Beware “I Believe”

I believe in trusting my intuition. 
I believe children are our future.
I believe Jesus will come again.
I believe humanity won’t survive the century.
I believe sex is beautiful and natural.
I believe myth is more potent than history.
I believe I can do anything I set my mind to.
I believe everyone deserves a second chance.

Why say "I believe X" instead of just saying X?  After all, we typically claim to believe most what we say.  Sometimes "I believe X" indicates you are especially tentative and open to persuasion about X, but that doesn’t seem to cover the above examples, nor the famous "This I believe" essays, on authors’ "rules they live by, the things they have found to be the basic values in their lives" and "the core values and beliefs that guide their daily lives."

These examples seem to be 1) clear value statements, 2) obvious truths few would dispute, which seem to represent values, and 3) controversial factual claims.  They all seem to indicate a strong emotional attachment, which can be fine for values, but is a rationality no-no for factual beliefs, especially controversial ones.

If you feel tempted to say "I believe X" and can feel your emotions swell with the evil pleasure of attachment via belief, watch out!  Beware that road to rationality ruin. 

Added:  People rarely use "I value X" as a roundabout way to express a factual belief.  So their frequently saying "I believe X" as a way to express values seems to me further evidence that people often see values not as irreducible differing preferences, but as conditional values that we would share were it not for differing fact-like beliefs.  That is, we can imagine possible worlds in which the other values would make sense, but we believe we are not in those worlds.


Gullibility and Control

Science Friday highlighted research suggesting we jump more to conclusions when we feel out of control:

In situations in which a person is not in control, they’re more likely to spot patterns where none exist, see illusions, and believe in conspiracy theories. In a series of experiments, researchers created situations in which people had less control over their situation, and then tested how likely the participants were to see imaginary images embedded in snowy pictures. The researchers also had participants write about either a situation in which they had control, or a situation in which they didn’t, and then presented stories involving strange coincidences. People who had written about a situation in which they were not in control were more likely to draw non-existent connections between the coincidences, the researchers found.

This summary suggests out-of-control-feeling folks are biased to see more than there is, but perhaps in-control-feeling folks are biased to see less than there is.   


Correcting Biases Once You’ve Identified Them

Most of the discussion on this blog seems to focus on figuring out how to identify biases.  We implicitly assume that this is the hard part; that biases can be really sneaky and hard to ferret out, but that once you’ve identified a bias, correcting it is pretty straightforward and mechanical.  If you’ve figured out that you have a bias that causes you to systematically overestimate the probability of a particular kind of event happening by .2, you simply subtract .2 from future estimates (or whatever).  But it seems to me that actually correcting a bias can be pretty hard even once it’s been identified.  For example, I have a tendency to swing a bit too late at a (slow-pitch) softball.  I’m sure this bias could be at least partially corrected with effort, but it is definitely not simply a matter of saying to myself: "swing .5 seconds sooner than you feel like you should swing."  That just can’t be done in real time without screwing up the other mechanics of the swing.
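For contrast, the genuinely mechanical case really is a one-liner (a minimal sketch; the .2 offset is the hypothetical figure from above, and real biases rarely come as a known constant):

def debias(raw_estimate, known_offset=0.2):
    """Correct a probability estimate for a known, fixed overestimation bias.

    Hypothetical: assumes the bias has already been measured as a constant
    additive offset, which is rarely true in practice.
    """
    return min(1.0, max(0.0, raw_estimate - known_offset))

print(debias(0.75))  # 0.55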

I think this is also a problem for more consequential matters.  In real decision-making situations, where there are elements of the problem that need attention besides the (already identified) bias, it is not going to be a trivial matter to fix the bias without screwing up some other part of the problem even worse.  I'm not sure this is the right way to put it, but it seems like OB engineering is a separate and important discipline distinct from OB science.


Use the Native Architecture

Imagine writing two versions of the same computer program. The first represents its integers as 32-bit binary numbers.  The second writes the numbers in base 10, as ASCII strings, with each byte storing one digit.

The second version has its upsides.  Thirty-two bit numbers max out at several billion, but you can keep tacking digits onto the string until you’re out of memory.

That said, the program that uses 32-bit integers runs faster because it uses the native architecture of the CPU.  The CPU was designed with this more compact format for numbers in mind, with special-purpose circuits like 32 bit adders.
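To make the contrast concrete, here is a minimal Python sketch of what the string version has to do in software for a single addition; the exact speed gap will vary by machine and language, but the hardware adder does the native-format version in one step:

def add_decimal_strings(a, b):
    """Add two non-negative integers stored as base-10 ASCII strings,
    one digit per byte - emulating in software what the CPU's adder does natively."""
    result, carry = [], 0
    for da, db in zip(reversed(a.zfill(len(b))), reversed(b.zfill(len(a)))):
        carry, digit = divmod(int(da) + int(db) + carry, 10)
        result.append(str(digit))
    if carry:
        result.append(str(carry))
    return ''.join(reversed(result))

print(add_decimal_strings('987654321', '123456789'))  # '1111111110', digit by digit
print(987654321 + 123456789)                          # the same sum, done natively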

The same principle applies to using one’s brain:  Some things the brain can do quickly and intuitively, and some things the brain has to emulate using many more of the brain’s native operations.  Sometimes thinking in metaphors is a good idea, if you’re human.

In particular, visualizing things is part of the brain’s native architecture, but abstract symbolic manipulation has to be learned.  Thus, visualizing mathematics is usually a good idea.

When was the last time you made a sign error?

When was the last time you visualized something upside-down by mistake?

I thought so.

Continue reading "Use the Native Architecture" »


Beauty Bias

In a TV game show, pretty contestants were not better or more cooperative players, but other contestants seemed to act as if they were:

It’s an uncomfortable truth that beautiful people make more money. … Now a study of a TV game show supports the prejudice hypothesis. … V. Bhaskar … analysed 69 episodes of Shafted … At the end of a round, the highest-scoring player picks a contestant to eliminate. Although the least attractive players scored no worse in the show than others, they were twice as likely to be eliminated in the first round. The contestants did not seem to base their decision on other factors such as age or sex. …

Contestants also confused attractiveness with cooperativeness. In the final round of Shafted, the last two players vie for an accumulated pot of money. Each player must opt to share the prize or attempt to grab it all for themselves. If one player opts to grab while one opts to share, the grabber takes the lot. If both try to grab, they both leave empty-handed, so game theory dictates that the leading contestant should pick a fellow finalist who is likely to cooperate. 

Even though attractiveness was found to have no bearing on cooperativeness, the leader often elected to play the final round with the most attractive of their remaining rivals. In 13 shows, these looks-based decisions even overrode a simple imperative to choose their highest-scoring rival, which would have led to an increase in the ultimate prize fund. In these cases, the prize was £350 lower than it could have been, on average.
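Laid out as a payoff table, the leader's incentive is clear (a minimal sketch; the pot size is made up, and the quote does not say what happens when both players share, so the even split is an assumption):

POT = 1000  # hypothetical prize pool

# Payoffs as (leader, rival), indexed by (leader's choice, rival's choice).
payoffs = {
    ('share', 'share'): (POT / 2, POT / 2),  # assumed: pot split evenly
    ('grab',  'share'): (POT, 0),            # grabber takes the lot
    ('share', 'grab'):  (0, POT),
    ('grab',  'grab'):  (0, 0),              # both leave empty-handed
}

# Against a rival who shares, the leader gets something either way;
# against a rival who grabs, the leader gets nothing either way -
# hence the premium on picking a finalist who is likely to cooperate.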

It also seems we think everyone is prettier when we are tipsy:

Researchers … randomly assigned 84 heterosexual students to consume either a non-alcoholic lime-flavoured drink or an alcoholic beverage with a similar flavour. … After 15 minutes, the students were shown pictures of people their own age, from both sexes.  Both men and women who had consumed alcohol rated the faces as being more attractive than did the controls … The effect was not limited to the opposite sex – volunteers who had drunk alcohol also rated people from their own sex as more attractive.

So, do we think everyone is better and more cooperative when we are tipsy?


Dark Dreams

Here’s another reason to prefer reality over dreams; dreams are darker:

We collected dream reports (N=419) and daily event logs (N=490) from 39 university students during a two-week period, and interviewed them about real threat experiences retrievable from autobiographical memory (N=714). Threat experiences proved to be much more frequent and severe in dreams than in real life, and Current Dream Threats more closely resembled Past than Current Real Threats.

If someday we have tech to suppress dreams (or at least memories of them), will it be considered cruel to allow your kids to dream?  HT to Tyler.


Bias in Real Life: A Personal Story

All too often, I, like all too many Americans, will walk into a fast food joint.  As is well known, the fast food industry has, for a good number of years now, been pushing combination meals — a single order will purchase a main course (classically, burger), a side order (fries) and a drink (coke).  As is also well known (pdf), people respond to cues like this in judging how much to consume — if something is packaged as a meal, we process it as a meal.  (In case that link doesn’t work, it’s to Brian Wansink & Koert van Ittersum, "Portion Size Me: Downsizing Our Consumption Norms" Journal of the American Dietetic Association 107:1103-1106 (2007).)

All this stuff is old news.  But, I wouldn’t expect myself to fall for it (which is the point of this post: I did).  I’m a pretty cynical and suspicious guy, a cynicism and suspicion that rises to almost downright paranoia when it comes to marketing.  (I’ve been known to heavily discount a health threat the moment someone starts selling a product to protect against it, for example.)  I flatter myself by thinking I’m somewhat intelligent.  And I’m well aware of the above research. 

Yet every few weeks until today, I’d walk into a Taco Bell and order one of those combo meals.  This is so even though I often don’t particularly want one of the items on the combo — I’m usually fairly indifferent between, say, having a soda and just drinking water.  Since water’s free and soda isn’t, rationally, I should just drink water every time.  So why do I order the combo meal?  Well, it’s in a combo meal — presumably, it’s cheaper than buying the items separately.  I’m saving money!*  Or, at least, this is the rationalization my brain would supply, on a level just below consciousness except on those rare, fleeting, and unproductive moments when I’d bother to think before ordering.**

Recently, in order to live a little healthier, I made a firm decision to stop consuming sodas.  So it was actually easy to figure out how much I was "saving" by ordering the combo meal instead of all three items.***

Guess how much I saved.  Go ahead.  Guess.  In the comments, even, if you want (status points to the first person who gets it right).  Highlight the space between the brackets to see, after you’ve guessed. 

[Combo meal savings over ordering all three items separately: $0.08.  Extra combo meal cost over ordering just the two items I wanted: $1.61]

I fell for this kind of stupidity even though I know the research.  Do you? 

I really think this bears emphasis.  I know this research really well, and I have known it for over a decade.  If they can get me, they can get anyone.  Everyone, even serious experts, even the guy who largely invented the study of these common biases, can fall prey to this kind of thing.  Dare you think you’re exempt? 

Do you think maybe this contributes to our obesity problem? Or do you still think that overeating can casually be described as a "free choice" for which people are personally responsible?  (While Taco Bell profits from selling unwanted sodas…)

Policy message: if even informed people can be suckered like this, maybe it is time for a legislative solution.

Continue reading "Bias in Real Life: A Personal Story" »
