Category Archives: Statistics

You Are Not Hiring the Top 1%

Today’s statistical fallacy (slightly redacted by editor) comes from Joel on Software:

Everyone thinks they’re hiring the top 1%. Martin Fowler said, “We are still working hard to hire only the very top fraction of software developers (the target is around the top 0.5 to 1%).” I hear this from almost every software company. “We hire the top 1% or less,” they all say. Could they all be hiring the top 1%? Where are all the other 99%? General Motors?

When you get 200 resumes, and hire the best person, does that mean you’re hiring the top 0.5%?  Think about what happens to the other 199 that you didn’t hire.  They go look for another job.
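Joel’s point lends itself to a quick simulation. The sketch below is my own toy model, not anything from the post: ability is uniform, every unemployed worker sends one resume per round, each employer hires the best of its 200, and hires exit the market. Because the strongest candidates are snapped up almost immediately, the resume stream ends up dominated by repeat applicants from far below the top:

```python
import random

random.seed(1)

# Toy model (my assumptions, not Joel's numbers): 10,000 workers with
# ability ~ Uniform(0, 1). Each round, every unemployed worker sends a
# resume to one employer; each employer hires the best of its 200
# resumes, and the hire exits the market. The rejected 199 reapply.
pool = [random.random() for _ in range(10_000)]
top_1pct_cutoff = sorted(pool)[-100]  # ability of the 100th-best worker

applications = 0
strong_applications = 0  # resumes sent by the population's true top 1%
while len(pool) >= 200:
    applications += len(pool)
    strong_applications += sum(a >= top_1pct_cutoff for a in pool)
    random.shuffle(pool)
    survivors = []
    for i in range(0, len(pool), 200):
        batch = pool[i:i + 200]
        if len(batch) == 200:
            batch.remove(max(batch))  # this employer's hire exits
        survivors.extend(batch)
    pool = survivors

share = 100 * strong_applications / applications
print(f"resumes from the true top 1%: {share:.2f}% of all resumes")
```

Under these assumptions the true top 1% account for far less than 1% of the resumes any employer ever sees, so “best of 200 resumes” is nowhere near “top 0.5% of programmers.”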

Continue reading "You Are Not Hiring the Top 1%" »


Truth is stranger than fiction

Robin asks the following question here:

How does the distribution of truth compare to the distribution of opinion?  That is, consider some spectrum of possible answers, like the point difference in a game, or the sea level rise in the next century. On each such spectrum we could get a distribution of (point-estimate) opinions, and in the end a truth.  So in each such case we could ask for truth’s opinion-rank: what fraction of opinions were less than the truth?  For example, if 30% of estimates were below the truth (and 70% above), the opinion-rank of truth was 30%.

If we look at lots of cases in some topic area, we should be able to collect a distribution for truth’s opinion-rank, and so answer the interesting question: in this topic area, does the truth tend to be in the middle or the tails of the opinion distribution?  That is, if truth usually has an opinion rank between 40% and 60%, then in a sense the middle conformist people are usually right.  But if the opinion-rank of truth is usually below 10% or above 90%, then in a sense the extremists are usually right.

My response:

1.  As Robin notes, this is ultimately an empirical question which could be answered by collecting a lot of data on forecasts/estimates and true values.

2.  However, there is a simple theoretical argument suggesting that truth will generally be more extreme than point estimates: the opinion-rank (as defined above) will have a distribution more concentrated at the extremes than a uniform distribution.
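A toy simulation (my own construction, assuming Gaussian truths and Bayesian point estimates; none of this is from Robin’s post) illustrates the claim. Each observer sees a noisy signal and reports the posterior mean, which shrinks the signal toward the prior mean; the noisier the signals, the more often the truth lands in the tails of the opinion distribution:

```python
import random

random.seed(0)

# Assumed model: truth t ~ N(0, 1) per case; each of 100 observers sees
# t + e with e ~ N(0, SIGMA^2) and reports the Bayesian posterior mean,
# i.e., the signal shrunk toward the prior mean by 1 / (1 + SIGMA^2).
SIGMA = 2.0
SHRINK = 1 / (1 + SIGMA**2)

ranks = []
for _ in range(2_000):
    t = random.gauss(0, 1)
    opinions = [SHRINK * (t + random.gauss(0, SIGMA)) for _ in range(100)]
    ranks.append(sum(o < t for o in opinions) / len(opinions))

# If opinion-rank were uniform, 20% of truths would rank below 10% or
# above 90%; shrunken point estimates push far more mass to the tails.
tails = sum(r < 0.10 or r > 0.90 for r in ranks) / len(ranks)
print(f"share of truths in the extreme tails: {tails:.2f}")
```

With this noise level roughly half the truths fall in the extreme tails, against 20% under uniformity; the more each estimator shrinks, the more “extremist” the truth looks.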

The argument goes as follows:

Continue reading "Truth is stranger than fiction" »


Sick of Textbook Errors

One of the most well-worn examples in introductions to Bayesian reasoning is testing for rare diseases: if the prior probability that a patient has a disease is sufficiently low, the probability that the patient has the disease conditional on a positive diagnostic test result may also be low, even for very accurate tests. One might hope that every epidemiologist would be familiar with this textbook problem, but this New York Times story suggests otherwise:

For months, nearly everyone involved thought the medical center had had a huge whooping cough outbreak, with extensive ramifications. […]

Then, about eight months later, health care workers were dumbfounded to receive an e-mail message from the hospital administration informing them that the whole thing was a false alarm.

Now, as they look back on the episode, epidemiologists and infectious disease specialists say the problem was that they placed too much faith in a quick and highly sensitive molecular test that led them astray.
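The textbook arithmetic behind such a false alarm, with hypothetical numbers of my own choosing (the story does not report the test’s actual characteristics): even a 99%-sensitive test produces mostly false positives when the condition is rare among those tested.

```python
# Hypothetical figures, not from the NYT story.
prior = 0.01        # assumed prevalence among those tested
sensitivity = 0.99  # assumed P(positive | disease)
specificity = 0.96  # assumed P(negative | no disease)

# Bayes' rule: P(disease | positive) = P(pos | dis) P(dis) / P(pos)
p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
ppv = sensitivity * prior / p_positive
print(f"P(disease | positive) = {ppv:.2f}")  # -> 0.20
```

Under these numbers four out of five positives are false, despite the near-perfect sensitivity.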

While medical professionals can modestly improve their performance on inventories of cognitive bias when coached, we should not overestimate the extent to which formal instruction such as statistics or epidemiology classes will improve actual behavior in the field.


Symmetry Is Not Pretty

From Chatty Apes we learn that symmetry has little to do with whether a face is attractive:

Measurable symmetry accounts for less than 1% of the variance in the attractiveness of women’s faces and less than 3% of the variance of the attractiveness of men’s faces.  … the initial studies showing big effects typically involved samples of less than 20 faces each, which is irresponsibly small for correlational studies with open-ended variables.  Once the bigger samples started showing up, the effect basically disappeared for women and was shown to be pretty low for men.  But no one believed the later, bigger studies, even most of their own authors — pretty much everyone in my business still thinks that symmetry is a big deal in attractiveness.  So, the first lesson I learned:  Small samples are …  My solution has been to ditch the old p<.05 significance standard.

I see the same thing in health economics; once people see some data supporting a theory that makes sense to them, they neglect larger contrary data.
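The small-sample lesson can be made concrete with a simulation of the significance filter (my own toy numbers, not the actual face studies): suppose the true correlation is a modest r = 0.1 and every study uses 15 faces. Only studies that happen to draw a wildly inflated sample correlation clear p < .05, so the “published” record shows a big effect that is not there:

```python
import math
import random

random.seed(0)

TRUE_R = 0.1    # assumed true symmetry-attractiveness correlation
N = 15          # faces per study, as in the early small samples
STUDIES = 10_000
R_CRIT = 0.514  # |r| needed for p < .05, two-tailed, df = 13

def sample_r(n, rho):
    """Sample correlation of n draws from a bivariate normal with corr rho."""
    xs, ys = [], []
    for _ in range(n):
        x = random.gauss(0, 1)
        xs.append(x)
        ys.append(rho * x + math.sqrt(1 - rho**2) * random.gauss(0, 1))
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / math.sqrt(sxx * syy)

results = [sample_r(N, TRUE_R) for _ in range(STUDIES)]
significant = [r for r in results if abs(r) >= R_CRIT]
avg_sig = sum(abs(r) for r in significant) / len(significant)
print(f"{len(significant)}/{STUDIES} studies reach p < .05; "
      f"their average |r| is {avg_sig:.2f}, vs a true r of {TRUE_R}")
```

The handful of studies that clear the bar report correlations of 0.5 or more by construction, five times the assumed true effect, which is roughly the pattern of big early findings melting away in larger samples.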


Malatesta Estimator

We frequently encounter competing estimates of politically salient magnitudes, such as the number of attendees at the 1995 “Million Man March”.  Such estimates often emanate from biased observers seeking to create or dispel an impression of strength.  Someone interested in generating a more neutral estimate might consider applying what I would call the Malatesta Estimator, which I have named after its formulator, the fourteenth-century Italian mercenary captain Galeotto Malatesta of Rimini (d. abt. 1385).  His advice was: “Take the mean between the maximum given by the exaggerators, and the minimum by detractors, and deduct a third” (Saunders 2004).  Deducting a third of the mean leaves two-thirds of (max + min)/2, so the rule simplifies into: the sum of the maximum and the minimum, divided by three.  It adjusts for the fact that the minimum is bounded below by zero, while there is no bound on the maximum.  Of course, it only works if the maximum is at least double the minimum; otherwise the estimate would fall below the minimum itself.

In the case of the Million Man March, supporters from the Nation of Islam claimed attendance of 1.5 to 2 million.  The Park Service initially suggested that 400,000 had participated.  The Malatesta Estimator therefore yields an estimate of 800,000.  We can calibrate this by comparing it with an estimate by Dr. Farouk El-Baz and his team at the Boston University Remote Sensing Lab.  Dr. El-Baz and his team used samples of 1-meter-square pixels from a number of overhead photos to estimate the density per pixel, and then calculated an estimate for the entire area.  Their estimate was 837,000, with 20% error bounds giving a range from 670,000 to 1 million.
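As code, the rule is a one-liner; the guard reflects the caveat that the estimator only makes sense when the maximum is at least double the minimum:

```python
def malatesta(minimum, maximum):
    """Mean of the two extremes, less a third: (max + min) / 3."""
    if maximum < 2 * minimum:
        raise ValueError("requires the maximum to be at least double the minimum")
    return (maximum + minimum) / 3

# Million Man March: Park Service minimum vs. Nation of Islam maximum.
print(malatesta(400_000, 2_000_000))  # -> 800000.0
```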

Saunders, Frances Stonor. 2004. The Devil’s Broker: Seeking Gold, God, and Glory in Fourteenth-Century Italy. (New York: HarperCollins), p. 93.

BU Remote Sensing Lab press release, accessed 14 December 2006.
