
The "scientists" in Montserrat seem to ignore the need to use data to back up their opinions. Some are also using laws of physics invented ONLY for Montserrat.

I was well trained in the Scientific Method -- hypothesis testing, awareness of experimenter bias (EB), etc. Through the years I've read thousands of journal articles in dozens of research fields -- Cognition, Learning, Neurology, Earth Sciences, Biology, and Behavioral Sciences. In comparison to these journal articles, the "stuff" published by the Montserrat SAC is more science fiction than real science.

The error of attaching probability numbers to claims without DATA also appears in the UN Climate Report -- that bit about a 90% probability that the Himalayan glaciers will melt by 2035 (or 2050). This is the same sort of "science" found in the Montserrat Scientific Advisory Committee's reports. Why bother with data when "scientists" can come up with wild-a** guesses (SWAG) and present them as "science"?

How can a massive human population NOT have an effect on the climate? The history of humans shows that we do impact the environment -- Easter Island was a lab of sorts.

When the truth should be enough -- why have some Climate scientists chosen the SCARE science route? This sort of garbage makes scientists look bad.

The problem with crying wolf -- when the creature is perhaps only a mouse -- is that fewer people will believe the ones calling the alert. So when a real scientist comes along with a real warning about impending disaster, that person might not be believed.


P.S. "I am applaud," below, was meant as a Yogism: I am appalled, yet I do applaud.


As one who spent many monotonous hours trying to measure the amount of chlorine in one gram of a chemical compound to within 0.001%, I find the enterprise of measuring the temperature of the entire Earth's atmosphere to within a hundredth of a degree doubtful at best.

Although I thank God I left science for business, I am applaud those who attempt to collect such lofty and tedious data.


To hell with trials. Do you think the Wikipedia founders should have first started with trials to test the accuracy and breadth of their method?

If prediction markets are so awesome, then there should be opportunities to use them all over the place. Get yourself a clever idea and do it.

It's not embarrassment you need to cultivate, it's jealousy and greed.

Your moonlighting gig counts for this I suppose.


For what it's worth, the idea of weighing experts is used in machine and statistical learning as well. For example, in classification problems, ensemble methods combine various classifiers by weighting each according to its measured effectiveness. (And yes, there are versions that use Bayes' Rule to compute optimal weightings, but that's really neither here nor there.)
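A minimal sketch of that accuracy-weighted voting idea, assuming each classifier is a function mapping inputs to 0/1 labels (the function names and the weighting scheme here are illustrative, not any particular library's API):

    import numpy as np

    def accuracy_weights(classifiers, X_val, y_val):
        # Weight each classifier by its accuracy on held-out validation data.
        accs = np.array([np.mean(clf(X_val) == y_val) for clf in classifiers])
        return accs / accs.sum()  # normalize so the weights sum to 1

    def weighted_vote(classifiers, weights, X):
        # Combine 0/1 predictions as a weighted average, thresholded at 0.5.
        votes = np.stack([clf(X) for clf in classifiers])  # shape (n_clf, n_samples)
        return (weights @ votes > 0.5).astype(int)

The Bayesian versions mentioned above replace the raw accuracies with weights derived from each classifier's likelihood of being correct, but the combine-by-weight structure is the same.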


I didn't know that Nature published statistics articles. My impression is that advances in statistics are published in stat journals or sometimes in journals in related fields (econ, poli sci, sociology, psychology, CS, or (in the case of computational methods) physics), but not in Science or Nature. If stat articles appear in Science or Nature, I'm not sure what they're about.


Andrew, what about Nature statistics articles -- are those often incompetent?


Robin: I disagree with you there. Acceptance into Nature or Science is a bit of a crapshoot. My impression is that the social science papers they publish are sometimes pretty wacky.


Remember, this is the top science journal! That makes mere incompetence rather unlikely as an explanation -- very few incompetent articles make it into Nature.


More constructively, how could we distinguish between these two explanations?

I think the lack of scientific rigor (independent of which possibilities Willy Aspinall chose to consider) points toward incompetence.


This.

At least with regard to the issue of motivation. Robin Hanson's critique of Willy Aspinall's not requiring field trials still stands.


@billb, @Darin

If failures are rare, then you can run a market that asks about the probability of any one failing. The Foresight Exchange has had claims about earthquakes for quite a while. So far, they aren't providing any information that isn't already in the USGS estimates.

But if you had situations with individual experts (or people with the ability to do the research) and no consensus estimate, then setting up a market would elicit a prediction. If you use a variable payout claim based on date, you can get a probability even for unlikely events.
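For concreteness, here is a minimal sketch (my own illustration, not the Foresight Exchange's actual contract terms) of reading a probability out of a binary claim's price, plus a constant-hazard conversion to an expected time to failure:

    import math

    def implied_probability(price, payout=1.0):
        # A claim pays `payout` if the event occurs by the deadline;
        # its market price, as a fraction of the payout, reads as a probability.
        return price / payout

    def implied_hazard_rate(price, years, payout=1.0):
        # Assuming a constant hazard rate: P(fail by T) = 1 - exp(-lam * T),
        # so lam = -ln(1 - p) / T.
        p = implied_probability(price, payout)
        return -math.log(1.0 - p) / years

    # e.g. a $1 claim on "dam fails within 30 years" trading at $0.05
    lam = implied_hazard_rate(0.05, 30)
    print(1.0 / lam)  # implied expected time to failure: roughly 585 years

A date-based variable payout claim refines this by paying more the sooner the event occurs, so the price pins down more of the failure-time distribution than a single cutoff date does.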


Re: Weighing Scientists

From my statistical research, scientists do not as a group weigh more than other individuals in the general population.


What about this method in cases where the correct answer is never revealed? How would you run a prediction market for the time to failure of a dam, to use Aspinall's example, given that it's unlikely the dam will ever fail? The market would never close.

It seems to me we are stuck with eliciting expert opinion the old-fashioned way for questions like this. Am I missing something?


I don't see how a prediction market is going to help you predict the eruption of a volcano or the failure of a dam. Can you explain how you'd set up such a market?


Don't these solve different problems? Prediction markets don't provide a way of making predictions so much as provide an incentive for someone to find a good way to make predictions. Something like the Cooke method might still be used by a prediction market investor.
