Monthly Archives: September 2007

Rationalization

Followup to: The Bottom Line, What Evidence Filtered Evidence?

In "The Bottom Line", I presented the dilemma of two boxes only one of which contains a diamond, with various signs and portents as evidence.  I dichotomized the curious inquirer and the clever arguer.  The curious inquirer writes down all the signs and portents, and processes them, and finally writes down "Therefore, I estimate an 85% probability that box B contains the diamond."  The clever arguer works for the highest bidder, and begins by writing, "Therefore, box B contains the diamond", and then selects favorable signs and portents to list on the lines above.

The first procedure is rationality.  The second procedure is generally known as "rationalization".

"Rationalization."  What a curious term.  I would call it a wrong word.  You cannot "rationalize" what is not already rational.  It is as if "lying" were called "truthization".

Continue reading "Rationalization" »


Lies About Sex

Over at Certain Doubts, Gregory Wheeler reviews our lies about sex:

In survey after survey within country after country men report having more heterosexual partners over their lifetime than women do, and as this article and this clarification point out, what people say in these surveys cannot be a reflection of what they do.

If these surveys were representative, we would expect the average number of heterosexual partners reported by men in each sample to approximate the average number reported by women. Instead, the numbers aren’t even close. In Britain men report having an average of 12.7 heterosexual partners over a lifetime whereas women report an average of 6.5; in France men report an average of 11.6 heterosexual partners, women 4.4; and in Germany men say 15.5, women 10.1.  …

But the interesting feature of Brown’s study was that he also asked the respondents to rate the truthfulness of their estimates. He found that 5% of men and 4% of women indicated that they thought their estimates were inaccurate, and 16% of men and 11% of women indicated that they knowingly misrepresented their counts. Still, even when these "self-incriminators" were removed from the sample population, there was still a significant discrepancy between the counts for men and women.

Yes, many are aware they lie about sex.  But it seems many others are not aware.  That suggests that you, yes you, do not really know how many sexual partners you have had!
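The arithmetic behind the puzzle is worth spelling out: in a closed population, every heterosexual partnership pairs one man with one woman, so the totals counted from each side must be equal. A toy sketch (all names and numbers here are made up for illustration):

```python
# Accounting identity behind the survey puzzle: each partnership is one
# (man, woman) pair, so summing over men and summing over women must
# give the same total -- and with equally many men and women, the
# average counts must match too.

partnerships = [("m1", "w1"), ("m1", "w2"), ("m2", "w2"), ("m3", "w1")]

men = {"m1", "m2", "m3"}
women = {"w1", "w2", "w3"}  # w3 reports zero partners

male_total = sum(1 for m, w in partnerships if m in men)
female_total = sum(1 for m, w in partnerships if w in women)
assert male_total == female_total  # each pair counted once per side

male_avg = male_total / len(men)
female_avg = female_total / len(women)
assert male_avg == female_avg  # holds whenever the group sizes are equal
```

Since the surveyed populations have roughly equal numbers of men and women, reported averages like 12.7 versus 6.5 cannot both be honest and accurate.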

Added: The movie Secrets and Lies has an ambiguous case: does she lie or self-deceive?


What Evidence Filtered Evidence?

 Yesterday I discussed the dilemma of the clever arguer, hired to sell you a box that may or may not contain a diamond.  The clever arguer points out to you that the box has a blue stamp, and it is a valid known fact that diamond-containing boxes are more likely than empty boxes to bear a blue stamp.  What happens at this point, from a Bayesian perspective?  Must you helplessly update your probabilities, as the clever arguer wishes?

If you can look at the box yourself, you can add up all the signs yourself.  What if you can’t look?  What if the only evidence you have is the word of the clever arguer, who is legally constrained to make only true statements, but does not tell you everything he knows?  Each statement that he makes is valid evidence – how could you not update your probabilities?  Has it ceased to be true that, in such-and-such a proportion of Everett branches or Tegmark duplicates in which box B has a blue stamp, box B contains a diamond?  According to Jaynes, a Bayesian must always condition on all known evidence, on pain of paradox.  But then the clever arguer can make you believe anything he chooses, if there is a sufficient variety of signs to selectively report.  That doesn’t sound right.
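The mechanics of the dilemma can be made concrete with a minimal Bayesian sketch. All the likelihoods below are invented numbers; the point is how naive conditioning on each reported sign behaves:

```python
# Minimal Bayesian update for the boxed-diamond problem.
# All probabilities are hypothetical, chosen only for illustration.

def update(prior, p_sign_given_diamond, p_sign_given_empty):
    """Posterior P(diamond in B | sign) from prior P(diamond in B), via Bayes."""
    joint_diamond = prior * p_sign_given_diamond
    joint_empty = (1 - prior) * p_sign_given_empty
    return joint_diamond / (joint_diamond + joint_empty)

prior = 0.5  # box B equally likely to hold the diamond a priori

# The clever arguer reports only the favorable signs:
p = update(prior, 0.8, 0.4)  # "box B shows a blue stamp"
p = update(p, 0.7, 0.5)      # "box A is shiny"

print(round(p, 3))  # → 0.737
```

Each reported sign really is valid evidence, and conditioning on it drives the posterior up; the catch is that a full accounting would also condition on all the signs the arguer chose *not* to report.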

Continue reading "What Evidence Filtered Evidence?" »


Jewish People and Israel

From David Bernstein at The Volokh Conspiracy:

"OUTRAGEOUS, IF TRUE: According to the Columbia Spectator, Barnard religion professor Alan Segal was asked by the university to provide a list of archeology experts to comment on the controversial tenure case of Nadia Abu El-Haj’s tenure–archeologists who ‘preferably’ were not Jewish. Segal quite properly refused, noting that religion ‘has nothing to do with what you say as a professional.’"

"El-Haj’s ‘scholarly’ work is premised on the idea that Jewish Israeli archeologists invented evidence of ancient Jewish settlement of the Land of Israel to justify Zionist claims to the land. Besides the issue of discrimination, which would be unthinkable in any other context related to any other group, the request to Segal seems like an implicit endorsement of her thesis, that Jewish archeologists cannot be trusted to be objective in their work related to Israel (which makes one wonder why the university would trust El-Haj, of Palestinian Arab origin, to be objective)."

If I were Nadia Abu El-Haj I would prefer, all else being equal, that Jewish people not be among those evaluating my scholarship for tenure.  So as not to be accused of anti-Semitism let me say that my mother and wife (although not my father or myself) are Jewish.  But based on my experience, Jewish people on average have a far more positive view towards Israel than non-Jewish people do.  El-Haj’s scholarship directly attacks Israel and so on average I would suspect that her scholarship would get a more favorable review from non-Jewish than Jewish archeologists.

In a world without bias the religion of El-Haj’s reviewers wouldn’t matter.  But we don’t live in such a world.  Given that this bias exists, it is rational to try to minimize the harm it might cause El-Haj.

Imagine that El-Haj’s research consisted of archeological evidence that she tried to use to disprove the historical accuracy of parts of the Koran.  Wouldn’t it be reasonable to try to avoid Islamic reviewers for her tenure case?

Religious beliefs often cause people to be biased against those who attack such beliefs.  To deny this, or to assume that college professors are too professional to allow such bias to influence them, is silly.


The Bottom Line

There are two sealed boxes up for auction, box A and box B.  One and only one of these boxes contains a valuable diamond.  There are all manner of signs and portents indicating whether a box contains a diamond; but I have no sign which I know to be perfectly reliable.  There is a blue stamp on one box, for example, and I know that boxes which contain diamonds are more likely than empty boxes to show a blue stamp.  Or one box has a shiny surface, and I have a suspicion – I am not sure – that no diamond-containing box is ever shiny.

Now suppose there is a clever arguer, holding a sheet of paper, and he says to the owners of box A and box B:  "Bid for my services, and whoever wins my services, I shall argue that their box contains the diamond, so that the box will receive a higher price."  So the box-owners bid, and box B’s owner bids higher, winning the services of the clever arguer.

The clever arguer begins to organize his thoughts.  First, he writes, "And therefore, box B contains the diamond!" at the bottom of his sheet of paper.  Then, at the top of the paper, he writes, "Box B shows a blue stamp," and beneath it, "Box A is shiny", and then, "Box B is lighter than box A", and so on through many signs and portents; yet the clever arguer neglects all those signs which might argue in favor of box A.  And then the clever arguer comes to me and recites from his sheet of paper:  "Box B shows a blue stamp, and box A is shiny," and so on, until he reaches:  "And therefore, box B contains the diamond."

Continue reading "The Bottom Line" »


False Findings, Unretracted

A recent Wall Street Journal article:

Dr. Ioannidis said "A new claim about a research finding is more likely to be false than true."  The hotter the field of research the more likely its published findings should be viewed skeptically, he determined. Take the discovery that the risk of disease may vary between men and women, depending on their genes.  Studies have prominently reported such sex differences for hypertension, schizophrenia and multiple sclerosis, as well as lung cancer and heart attacks. In research published last month in the Journal of the American Medical Association, Dr. Ioannidis and his colleagues analyzed 432 published research claims concerning gender and genes.  Upon closer scrutiny, almost none of them held up. Only one was replicated. …

His 2005 essay "Why Most Published Research Findings Are False" remains the most downloaded technical paper [at] the journal PLoS Medicine …  Another PLoS Medicine article … demonstrated that the likelihood of a published research result being true increases when that finding has been repeatedly replicated in multiple studies. … Earlier this year, informatics expert Murat Cokol and his colleagues at Columbia University sorted through 9.4 million research papers at the U.S. National Library of Medicine published from 1950 through 2004 in 4,000 journals. By raw count, just 596 had been formally retracted, Dr. Cokol reported.
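Ioannidis’s claim follows from a simple positive-predictive-value calculation: if only a small fraction of tested hypotheses are true, then even well-powered significant results are usually false positives. A sketch of that kind of calculation, with hypothetical but typical parameter values:

```python
# Positive predictive value of a "significant" finding: the probability
# the claimed effect is real, given standard power and significance
# thresholds. Numbers below are illustrative, not from the paper.

def ppv(prior, power=0.8, alpha=0.05):
    """P(claim is true | study reports a significant result)."""
    true_pos = prior * power          # true hypotheses correctly detected
    false_pos = (1 - prior) * alpha   # false hypotheses passing p < alpha
    return true_pos / (true_pos + false_pos)

# If only 1 in 20 tested hypotheses is actually true, a significant
# result from a well-powered study is right less than half the time:
print(round(ppv(prior=0.05), 2))  # → 0.46
```

In a "hot" field where many speculative hypotheses get tested, the prior is low, and most published positives are false, which is exactly what the gene-and-gender replication record showed.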

Is anyone still surprised to hear these things?   Hat tip to Giancarlo Ibargaen.


How to Convince Me That 2 + 2 = 3

In "What is Evidence?", I wrote:

This is why rationalists put such a heavy premium on the paradoxical-seeming claim that a belief is only really worthwhile if you could, in principle, be persuaded to believe otherwise.  If your retina ended up in the same state regardless of what light entered it, you would be blind…  Hence the phrase, "blind faith".  If what you believe doesn’t depend on what you see, you’ve been blinded as effectively as by poking out your eyeballs.

Cihan Baran replied:

I can not conceive of a situation that would make 2+2 = 4 false. Perhaps for that reason, my belief in 2+2=4 is unconditional.

Continue reading "How to Convince Me That 2 + 2 = 3" »


Elusive Conflict Experts

Recently published in Interfaces:

[Regarding] the decisions that adversaries will make, we compared the accuracy of 106 forecasts by experts [e.g., domain experts, conflict experts, and forecasting experts] and 169 forecasts by novices about [choices in] eight real conflicts. The forecasts of experts who used their unaided judgment were little better than those of novices, and neither group’s forecasts were much better than simply guessing. The forecasts of experts with more experience were no more accurate than those with less. The experts were nevertheless confident in the accuracy of their forecasts. … We obtained 89 sets of frequencies from novices instructed to assume there were 100 similar situations. Forecasts based on the frequencies were no more accurate than 96 forecasts from novices asked to pick the single most likely decision.

Maybe conflict games are full of mixed strategies?  Hat Tip to WSJ Online, via Tyler Cowen.


9/26 is Petrov Day

(Posted last year, but bumped to the top for today.)

Today is September 26th, Petrov Day, celebrated to honor the deed of Stanislav Yevgrafovich Petrov on September 26th, 1983.  Wherever you are, whatever you’re doing, take a minute to not destroy the world.

The story begins on September 1st, 1983, when Soviet jet interceptors shot down a Korean Air Lines civilian airliner after the aircraft crossed into Soviet airspace and then, for reasons still unknown, failed to respond to radio hails.  269 passengers and crew died, including US Congressman Lawrence McDonald.  Ronald Reagan called it "barbarism", "inhuman brutality", "a crime against humanity that must never be forgotten".  Note that this was already a very, very poor time for US/USSR relations.  Andropov, the ailing Soviet leader, was half-convinced the US was planning a first strike.  The KGB sent a flash message to its operatives warning them to prepare for possible nuclear war.

On September 26th, 1983, Lieutenant Colonel Stanislav Yevgrafovich Petrov was the officer on duty when the warning system reported a US missile launch.  Petrov kept calm, suspecting a computer error.

Then the system reported another US missile launch.

And another, and another, and another.

Continue reading "9/26 is Petrov Day" »


Drinking Our Own Kool-Aid

A firm that uses prediction markets to get info from its employees to forecast sales or deadline success is in essence saying:

Don’t just give us cheap talk for a flat fee; signal your confidence by risking a loss if you are wrong, in return for a gain if you are right.

Salesmen and consultants like me who try to convince firms to adopt prediction markets have so far mainly sought a flat fee for their advice or software.   But to drink our own Kool-Aid we should say:

Pick some parameters that you estimate now, but where you want more accurate estimates.  Show us the track record of your current estimation process and then declare two values:

  • Your dollar value for more accurate estimates of these parameters.
  • Your dollar value of time for employees who might join our markets.

Then you and we will together track the time your employees spend in our markets, and judge the relative accuracy of our new process, compared to your current process.  Our fee will just be your declared value for the improved accuracy we achieved, minus your declared value for the employee time we used.  (We might have to pay you.)  Deal?
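The proposed fee rule is simple enough to write down directly. A sketch with hypothetical numbers (the dollar figures and parameters below are invented for illustration):

```python
# The proposed contingent fee:
#   fee = (declared value of the accuracy improvement achieved)
#       - (declared value of the employee time the markets consumed)

def market_fee(value_per_point, points_improved, hourly_time_value, hours_used):
    return value_per_point * points_improved - hourly_time_value * hours_used

# Suppose the client values each point of forecast-error reduction at
# $10,000, the markets improved accuracy by 3 points, employee time is
# valued at $100/hour, and employees spent 250 hours trading:
fee = market_fee(10_000, 3, 100, 250)
print(fee)  # → 5000

# Had employees spent 400 hours instead, the fee would be -10,000:
# the vendor pays the client, just as the post's parenthetical warns.
```

This is what makes the offer more than cheap talk: the vendor's payment is at risk exactly when the markets fail to earn their keep.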

Added: We’ll help you look for good candidate parameters of course. 
