A well-connected reporter (whom I promised I'd keep anonymous) just told me that a major Washington media organization started a project studying major media pundits, and a big part of this project was assessing individual pundit forecast track records. After several months of several folks working on the project, it was killed, supposedly because management decided readers don't care as much about pundit accuracy as they'd previously thought.
I'm beginning to think this is the problem with philosophy, as well.
Science is often described as an agreed-upon objective method for determining who is right.
As far as I can tell, the common complaint about philosophical debates being 'endless' is true - philosophers haven't even tried to agree on a method for coming to agreement. (Thus, agreement only happens by chance.)
Could it be because most affiliate with a philosopher to leech status and signal various values, not because they think he might be correct? Extending this, could most logic use by philosophers be because logic is fashionable, not because they want logic to constrain their thoughts?
Lucky that scientists think an objective method is fashionable.
See the work of Phil Tetlock on this subject:
See the Hamilton study linked above by David. Liberals predict slightly better than conservatives. Lawyers predict worse than non-lawyers. Overall, the total set of examined pundits does no better than chance.
If memory serves, Rush Limbaugh says he's been independently audited and found to have an accuracy of around 0.99 (99%). Of course that's for factual statements, not predictions. I haven't investigated the measurement technique and can't speak for the quality of the result.
Both Limbaugh and Glenn Beck often boast about the accuracy of certain individual predictions, but of course that's anecdotal and must suffer from strong selection biases.
The results of a carefully controlled study would be enlightening. If it included pundits across the political spectrum, it might give some insight into what world-view is closest to the truth.
Maybe 'accuracy' was too difficult to determine in a significant number of cases. And given the topic at hand, perhaps they were extra careful that the article they were writing was itself accurate.
The direct link: http://www.hamilton.edu/new...
Hmmm, interesting. Society does do strange things, and how it functions is always changing.
Paul Krugman had a post about a study like this that was actually published and he links to a copy.
It is perhaps understandable, given the results, why certain major Washington media organizations would not want to actually look at the data.
It's like complaining that America's Got Talent isn't very good at picking genuinely talented mega-stars. It's not about that: it's about creating a TV programme that people like to watch and spend money on, and that sells adverts. If the winner also turns out to be successful, that's a welcome bonus.
Newspaper punditry isn't about accurate forecasts; it's about creating an entertaining newspaper that people like to read and spend money on, and that sells adverts. If the forecasts turn out to be good, that's merely a welcome bonus.
I would have thought it was killed after discovering that pundits don't make concrete forecasts that can be assessed. E.g., one can go back over seven years' worth of posts at Marginal Revolution and find a dearth of predictions. Do you make many predictions?
Civilization runs on an illusory narrative. No one really wants to destroy this narrative.
"it was killed, supposedly because management decided readers don't care as much about pundit accuracy as they'd previously thought." Of course that need not have been their real reason - perhaps some folks didn't like the ratings it was giving to their favorite pundits.
These two reasons sound the same; or at least the second reason sounds like a special case of the first.