Monthly Archives: February 2007

Medical Study Biases

Medical studies are seriously biased by interested funders and by tolerance for sloppy methods.  Here are four examples.

1.  A recent PLoS Medicine article looked at 111 studies of soft drinks, juice, and milk that cited funding sources.

22% had all industry funding, 47% had no industry funding, and 32% had mixed funding. … the proportion with unfavorable [to industry] conclusions was 0% for all industry funding versus 37% for no industry funding.

Continue reading "Medical Study Biases" »

This Is My Dataset. There Are Many Datasets Like It, but This One Is Mine. . .

Having read a huge number of studies on "happiness research" over the past year or so, I have concluded that the data is not very good and tells us little about happiness as most of us intuitively understand it. In fact, some of the problems with the data seem so damning, and so daunting, that it has become a matter of some surprise to me that so many researchers don’t see the alleged problems as damning or daunting at all, and just proceed pretty much as usual.

Now, maybe my analysis of the difficulties in measuring happiness with surveys (which I would be happy to share at some other time) is wrong. But even if I and other critics of the data are wrong, it appears that many of the best criticisms aren’t taken very seriously, even when they are duly noted. Indeed, I’ve noticed a tendency to bristle defensively at mention of problems with the data, or even at requests simply to be more precise in what it is that is being measured. "Don’t tell us we’re only really measuring dispositions to say certain things about happiness under various conditions! We don’t call it the Journal of Saying Things About Happiness Studies, now do we!" seems to be a fairly widespread attitude.  And there also seems to be a willingness to cite just about anything that superficially seems to support the validity of the measurement instrument — a sign of a kind of confirmation bias.

Now this is just my cumulative impression from reading a boatload of papers, and I’m not prepared to press it any further, or to get more specific about happiness research, which isn’t the point of this post anyway. The general question I want to raise concerns the possible biases of social scientists when it comes to the quality of the datasets they have come to depend upon.

Here’s a plausible fictional narrative on a topic other than happiness. Let’s do it in the second person:

Continue reading "This Is My Dataset. There Are Many Datasets Like It, but This One Is Mine. . ." »

Professors Progress Like Ads Advise

A system designed to advise a captive audience about the features and quality of available products would look a lot more like Consumer Reports than the world of advertising we see.  But this situation isn’t especially puzzling – we understand that neither those who make ads nor those who watch them have product information as their primary goal.   Ad makers want to sell, and ad watchers want to be entertained.   

Observers often have trouble, however, understanding how academia could consistently fail to achieve useful intellectual progress.  Since academia is such a decentralized competitive system, people figure that any failures to make progress must be the unavoidable error that appears in any system designed to explore the unknown.  Since we can’t know what we will discover until we discover it, complaints about progress are compared to the second-guessing of Monday-morning quarterbacks.

But in fact, academia is no more about making useful intellectual progress than advertising is about informing consumers.  Professors seek prestigious careers, while funders and students seek prestige by association.  Academics talk and write primarily to signal their impressive mental abilities, such as their mastery of words, math, machines, or vast detail.  Yes, contributing to useful intellectual progress can sometimes appear impressive, but the correlation is weak, and it is often hard to see who really contributed how much.   Progress happens, but largely as a side effect.   

Continue reading "Professors Progress Like Ads Advise" »

Disagreement Case Study 1

Robin asked a series of questions regarding case studies of disagreement. He didn’t get any public responses, so I thought I would offer one experience I had.

I’ll try to answer his questions with regard to a topic where I have had a long-standing disagreement with a pretty smart guy, who undoubtedly knows more about that topic than I do. However, his opinion, while shared by a vocal minority, is far outside the scientific mainstream. He now works as a professional advocate for his position, traveling all over the world giving talks, so he has very strong reasons to be biased, but we disagreed even before he took this job.

Continue reading "Disagreement Case Study 1" »

Marginally Revolved Biases

Tyler Cowen at Marginal Revolution posts today on two biases:

  • We think we perform better in front of supportive audiences, but actually "we perform better in front of strangers or even a hostile crowd."
  • We rate people better if they share our birthday.

And Tyler privately mentioned:

Calibrate Your Ad Response

Imagine you are about to watch a car ad.  You now have expectations about various aspects of the car, including its reliability, comfort, acceleration, cool factor, and so on.  These all combine into your total estimate of how much the car would be worth to you.  After you watch the ad, your expectations about many aspects may change.  You may think it more cool and reliable, but less comfortable and slower.  Sometimes you will think the car is worth more, and sometimes less, than you thought before.

If you expect that watching a car ad will tend to make you like that car more, raising your car value estimate, you are biased!  You should adjust your reaction tendencies until you expect no average change in your value estimate.  It can be reasonable to react positively to the fact that a car company chose to show you a car ad, but only if you react negatively when they choose not to show you an ad.

This is a very general result: you should expect any piece of information to make zero average change in any estimate of yours.  This applies to any aspect of any product, to any kind of ad or pitch, and to any kind of signal or clue you might get about anything.
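
This is just the law of iterated expectations. In my own notation (not from the post): let V be the car's value to you and A be whatever the ad turns out to show, so your pre-ad estimate is E[V] and your post-ad estimate will be E[V | A]. Averaging over what the ad might show,

\[
\mathrm{E}_{A}\!\left[\, \mathrm{E}[V \mid A] \,\right] \;=\; \mathrm{E}[V] ,
\]

so if you expect the ad to raise your estimate on average, your current estimate E[V] is simply too low, and you should raise it before watching anything.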

Why would car companies show ads to well-adjusted ad watchers?  Because even if ads do not change average estimates, they can increase estimate variation.  If most people’s estimates are below the threshold for wanting to buy the car, then increasing estimate variation should increase the fraction of people who want the car enough to buy it.  If most people already think a product is good enough, however, its sellers should avoid showing variation-increasing ads to well-adjusted watchers. 
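
As a rough illustration of the variation point, here is a small Python simulation with made-up numbers; the $30,000 buy threshold and the estimate spreads are assumptions for the sketch, not figures from the post.

    import random

    def fraction_buying(mean, sd, threshold, n=100_000, seed=0):
        # Fraction of simulated viewers whose value estimate exceeds the buy threshold.
        rng = random.Random(seed)
        return sum(rng.gauss(mean, sd) > threshold for _ in range(n)) / n

    # Hypothetical pre-ad estimates: mean $25,000 with a modest spread, so almost nobody buys.
    print(fraction_buying(25_000, 2_000, 30_000))   # roughly 0.006
    # An ad that leaves the mean unchanged but spreads estimates out helps the seller.
    print(fraction_buying(25_000, 8_000, 30_000))   # roughly 0.27

With the mean above the threshold instead, the same widening pushes some would-be buyers below it, matching the point above about products most people already like.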

For a two-sided contest, such as a political race or legal trial, the tentative loser wants variation-increasing pitches, while the tentative winner avoids such things.  So, a side’s relative silence can signal its confidence in being a tentative winner.

Disagreement Case Studies

I’d love to hear posts (on this blog or others) describing disagreement case studies.  That is, tell us about a specific disagreement (on a matter of fact, not value) that you have with specific other reasonably respectable people, and tell us how you reconcile that disagreement with the irrationality of foreseeing to disagree.

You know their opinion, and they know your opinion, and yet you hold differing opinions.   You realize that both your opinion and theirs may result in part from defects, such as thinking errors or not knowing something that the other knows.  So:

  • Do you conclude just from the fact that they disagree that they must have more defects?
  • Do you think they realize that they can have defects, such as thinking errors or knowing less?
  • Should the fact that you disagree be a clue to them about their defects?   Is it a clue about yours? 
  • Do they adjust their estimates enough for the possibility of their defects?  If not, why not?
  • What clues suggest to you that they have more defects, or under-adjust for them?
  • What clues suggest to them that you have more defects, or under-adjust?
  • Do you both have access to these clues, and if so do you interpret them differently? 
  • Do you each realize some clues might be hidden?   
  • Does your inability to answer any of these questions suggest you have defects?
  • Consider all these questions again for your meta-disagreement about who has more defects.

Some people’s answers come down to "I just know I’m smarter; I have no reasons."  I am writing a book on disagreement and want to include case studies like yours.

Added:  If many are involved in your disagreement, consider these questions  about other people on all sides. If your case is interesting, I’m willing to interview you by phone to walk you through this line of questioning. 

Just Lose Hope Already

Casey Serin, a 24-year-old web programmer with no prior experience in real estate, owes banks 2.2 million dollars after lying on mortgage applications in order to simultaneously buy 8 different houses in different states.  He took cash out of the mortgages (applying for larger amounts than the houses’ prices) and spent the money on living expenses and real-estate seminars.  He was expecting the market to go up, it seems.

That’s not even the sad part.  The sad part is that he still hasn’t given up.  Casey Serin does not accept defeat.  He refuses to declare bankruptcy, or get a job; he still thinks he can make it big in real estate.  He went on spending money on seminars.  He tried to take out a mortgage on a 9th house.  He hasn’t failed, you see, he’s just had a learning experience.

That’s what happens when you refuse to lose hope.

While this behavior may seem to be merely stupid, it also puts me in mind of two Nobel-Prize-winning economists…

Continue reading "Just Lose Hope Already" »

Less Biased Memories

Similar to the way posterity review could help academic incentives, a simple way to reduce bias in how we see our own lives is to collect more data about them.   From Marginal Revolution:

MyLifeBits has also provided Bell with a new suite of tools for capturing his interactions with other people and machines.  The system records his telephone calls and the programs playing on radio and television.  … stores a copy of every Web page he visits and a transcript of every instant message he sends or receives.  It also records the files he opens, the songs he plays and the searches he performs.  … MyLifeBits continually uploads his location from a portable Global Positioning System device, wirelessly transmitting the information to his archive.  … SenseCam, … automatically takes pictures when its sensors indicate that the user might want a photograph. 

How many of you would want this?  I wouldn’t.  I prefer the memories I choose to keep, and the ones I make up, over the ones I really had. 

Those who prefer unbiased memories should want this.  With a full record of your life, you could settle disputes about who said what when, and how often you do what.   

You don’t have to wait to record your full life in sound.  A $200 pocket voice recorder saves 150 MB of high-quality audio on a twelve-hour battery charge, and a $200 hard disk will store three years of audio at that rate.   Of course it will be a few years until we can organize such data well.
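
For concreteness, here is the arithmetic behind that claim as a back-of-the-envelope Python check; the round-the-clock recording schedule is my assumption, not the post's.

    MB_PER_12_HOURS = 150                  # rate quoted above
    mb_per_day = MB_PER_12_HOURS * 2       # assuming you record around the clock
    gb_for_three_years = mb_per_day * 365 * 3 / 1000
    print(gb_for_three_years)              # about 330 GB; half that if you record only waking hours

Either figure fits on the sort of $200 consumer hard disk mentioned above.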

Think Frequencies, Not Probabilities

A new article in Behavioral and Brain Sciences reviews attempts to explain the following puzzle.  People do badly at questions worded this way:

The probability of breast cancer is 1% for a woman at age forty who participates in routine screening. If a woman has breast cancer, the probability is 80% that she will get a positive mammography. If a woman does not have breast cancer, the probability is 9.6% that she will also get a positive mammography. A woman in this age group had a positive mammography in a routine screening. What is the probability that she actually has breast cancer? __%

They do much better at questions worded this way:

10 out of every 1,000 women at age forty who participate in routine screening have breast cancer.  8 out of every 10 women with breast cancer will get a positive mammography.  95 out of every 990 women without breast cancer will also get a positive mammography.  Here is a new representative sample of women at age forty who got a positive mammography in routine screening.  How many of these women do you expect to actually have breast cancer?  ___ out of ___.
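
For reference, both wordings have the same answer, roughly 7.8 percent; here is the calculation both ways as a quick Python check (mine, not part of the quoted problem).

    # Probability wording: Bayes' rule with the stated rates.
    p_cancer = 0.01
    p_pos_given_cancer = 0.80
    p_pos_given_no_cancer = 0.096
    p_pos = p_cancer * p_pos_given_cancer + (1 - p_cancer) * p_pos_given_no_cancer
    print(p_cancer * p_pos_given_cancer / p_pos)   # about 0.078

    # Frequency wording: 8 true positives out of 8 + 95 total positives.
    print(8 / (8 + 95))                            # about 0.078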

Whatever the explanation, the lesson should be clear: prefer to reason in terms of frequencies, instead of probabilities.  Thanks to Keith Henson for the pointer. 
