Insiders were paying attention to software development progress and deliveries to customers (we had about a handful of beta customers in the last year, before Xanadu was shut down when Autodesk reorganized). None of the active participants had much view into events at the parent company. (I was an employee, even though my name doesn't appear under the link to employees and consultants.)

Another lesson is that it's crucial to have multiple points of view. All the participants in that particular market were software developers.

Hal, this market was not anonymous, which may indeed have been a mistake.

Seems like this is a case where anonymity would be important, otherwise employees might be seen as disloyal if they predicted the product would fail. How was that handled at Xanadu?

As usual, project insiders acted very confident of success, and those of us more on the periphery expressed more doubts. The market price moderated the confidence of the insiders, and had more outsiders been able to participate we would have moderated that confidence even more. The price did fall, but not that far, at least for a while.

What do you think about the fact that Xanadu employees put their probability of success as high as 70%? This product is somewhat legendary today for its repeated failures despite its high promise. A number of articles have been written (see http://www.wired.com/wired/... from 1995 for example) analyzing the many obstacles and problems which arose over the years to prevent it from succeeding. With hindsight, 70% seems far higher than any rational estimate of its chances of success.

Do you recall what the price trend was? Was the probability falling as time went on?

Paul, yes it is theoretically possible that prediction markets do not induce people to reveal enough about their opinions, relative to some other kind of forum. Michael Abramowicz has tried to invent variations that do better on this issue, but they have not been tested.

Do prediction markets trade learning for accuracy? Unlike other methods of making predictions, which require something at least passing for a reasoned argument, prediction markets are wholly opaque (and indeed, people with knowledge have an incentive to create false arguments to mislead others into betting against them). Is the trade-off worth it?

Here's a concrete example. I tried to engage in a little noise trading on tradesports during the period of uncertainty before Bush announced the Alito nomination. I ended up losing the (fortunately small) amount of money I put into it when I bet heavily against Alito about 4 hours before the nomination was announced, when he was running very, very high. My reasoning ran as follows: "There is no public information pointing more to Alito than anyone else, and the price seems too high for people with inside information to be responsible for the whole value. Therefore there must be some kind of irrationality at work, perhaps one of Cass Sunstein's cascades. The price should go down." Obviously, this was wrong. But to this day, I don't know why. Maybe people with inside information put a lot more money in the market than I thought. Maybe a lot of people were a lot better at analyzing the political situation than I. And if that's true, I don't know what information or reasoning they used that I missed. Maybe a reliable expert made an announcement, supported by a very convincing argument, that I missed. There's no way for me to learn how everyone other than me knew it was to be Alito four hours before.

By contrast, if some accurate pundit had made the prediction on the basis of arguments and it had turned out true, I'd have new information about the truth of those arguments, and so would the rest of the society. By giving that hypothetical accurate pundit a financial incentive to secretly bet on the result instead of argue for the result, the public was deprived of potentially a lot of information.

One other thing while I'm thinking of it. While they're similar and probably related, "accurate" and "free from bias" are conceptually different notions.

For instance, Joe could have a bias in favour of his local sports team. If Joe lives in the Bronx, he could always believe that the Yankees will win the World Series. The fact that he'd be right more often than a fan of any other baseball team doesn't prove that his process is unbiased.

We need to consider, in other words, both validity and reliability, not just reliability.
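
To make the distinction concrete (purely as an illustration with invented numbers, not anything from the comment above), here is a small Python sketch in which Joe's always-pick-the-Yankees rule beats a Royals fan's analogous rule on raw hit rate while both remain heavily biased:

```python
# Toy illustration of "right more often" vs. "free from bias".
# The championship frequencies below are assumptions made up for the example.
import random

random.seed(0)
N = 10_000
P_YANKEES, P_ROYALS = 0.25, 0.03   # assumed long-run title frequencies

yankees_win = [random.random() < P_YANKEES for _ in range(N)]
royals_win = [random.random() < P_ROYALS for _ in range(N)]

def hit_rate(outcomes):
    """How often the fixed 'my team wins this year' call came true."""
    return sum(outcomes) / len(outcomes)

def bias(assigned_prob, outcomes):
    """Mean assigned probability minus observed frequency; 0 means unbiased."""
    return assigned_prob - sum(outcomes) / len(outcomes)

print("Joe (always Yankees): hit rate %.2f, bias %+.2f"
      % (hit_rate(yankees_win), bias(1.0, yankees_win)))
print("Royals fan:           hit rate %.2f, bias %+.2f"
      % (hit_rate(royals_win), bias(1.0, royals_win)))
```

Joe scores far better on hit rate (reliability), yet both procedures carry a large positive bias, which is the validity problem.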

One thing you can do is use a test and control method. Two ways to do this:

1) Look at two stocks with nearly-identical properties, but with one subject to a test condition (say a "famous" CEO) and one free of that condition. It's not a perfect test (lots of confounding factors), but if you do it enough times you do get a sense for the tendency of the market to be subject to the specific bias you're testing. Many of the papers in the site linked above do just that - compare stocks with media attention to those without media attention, for instance, and find the degree to which media attention biases price. (A rough sketch of this kind of comparison follows the list.)

2) Look at populations with specific biases. For instance, you can compare women's investment behaviour to men's investment behaviour. To the extent that they behave differently you can imply that the market will be biased towards the way that men view value (since far more men are involved in the market than women). Professor Odean found that women are better investors by a statistically significant margin, suggesting to me that they may be able to exploit the inefficiencies in the overall market caused by male biases swamping female biases in overall market pricing.
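
To make approach (1) concrete, here is a minimal sketch with invented daily returns and an invented media-attention split; a real study would match on size, industry, and so on, and use far more observations:

```python
# Hypothetical returns for two groups of otherwise-similar stocks,
# one with heavy media attention and one without.  All numbers are invented.
import math
import statistics as stats

media_returns = [0.012, -0.004, 0.020, 0.015, -0.001, 0.018, 0.009, 0.022]
no_media_returns = [0.006, 0.001, 0.004, 0.007, 0.003, 0.002, 0.005, 0.004]

def welch_t(a, b):
    """Welch's t statistic for the difference in mean returns."""
    va, vb = stats.variance(a) / len(a), stats.variance(b) / len(b)
    return (stats.mean(a) - stats.mean(b)) / math.sqrt(va + vb)

gap = stats.mean(media_returns) - stats.mean(no_media_returns)
print(f"mean return gap: {gap:+.4f}, Welch t = {welch_t(media_returns, no_media_returns):.2f}")
# Repeated over many matched pairs or dates, this gives a sense of how strongly
# the tested condition (here, media attention) pushes prices around.
```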

Thanks Robin, I'll review the paper.

Nicholas asks a great question about absolute level of error in markets. Unfortunately, Nicholas, the trick is not to compare actual returns versus price, it's to compare risk-adjusted actual returns versus price.

What you end up finding is that the 'rational' risk-adjustment is quite different from the 'observed' risk-adjustment. The observed is also possibly nonlinear and difficult to model theoretically (and tax-distorted and all kinds of other garbage), although interesting inroads have been made using Bayesian models.

So you'd get a very different value between market price and PV of returns, but it would be very difficult to say how much of that is due to errors in the market's judgement of risk and how much is due to errors in the market's judgement of the value of the underlying. And how much is due to non-market shifts such as tax distortion or demographic changes (baby boomers buying growth stocks in the 70s and income stocks in the 00s).

Of course, all those errors should have distortionary effects on the predictive value of markets - it's just hard to do the exact analysis you want to do and draw the conclusion you want to draw, I think. Could be wrong, of course.
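
To illustrate the point (a minimal sketch with invented dividends, terminal value, price, and discount rates, not real data): the same realized cash flows discounted under two different risk adjustments give two quite different "fair" values, so the size and even the sign of the price-versus-PV gap depends on which adjustment you believe.

```python
# Hypothetical 1960-style comparison: observed price vs. present value of the
# realized payout stream under two assumed risk adjustments.  Numbers invented.
dividends = [3.0 + 0.15 * t for t in range(40)]   # 40 years of hypothetical payouts
terminal_value = 120.0                            # hypothetical sale price in year 40
market_price = 55.0                               # hypothetical observed price

def present_value(cash_flows, terminal, rate):
    """Discount a yearly cash-flow stream plus a terminal value back to year 0."""
    pv = sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))
    return pv + terminal / (1 + rate) ** len(cash_flows)

for label, rate in [("'rational' adjustment", 0.07), ("'observed' adjustment", 0.10)]:
    pv = present_value(dividends, terminal_value, rate)
    print(f"{label}: PV = {pv:7.2f}, price - PV = {market_price - pv:+7.2f}")
```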

Nicholas, yes, I presume there are statistics of the form you seek, but they are not very relevant. The accuracy of a forecasting mechanism on a forecasting problem varies both with the mechanism's ability and with the problem's difficulty. But problem difficulty varies by far larger factors than mechanism ability plausibly could. Just about any mechanism will have very high accuracy on predicting whether the sun will come up tomorrow, and very low accuracy on predicting which radioactive Carbon-14 atom (with a half-life of 5700 years) will decay during each nanosecond. So when evaluating a mechanism it is crucial to control for problem difficulty.
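
As a small illustration of that point (my own sketch, using a coin-flip stand-in for the hard problem and the Brier score as the accuracy measure, not anything from the comment above): even a mechanism that reports the true probability gets wildly different accuracy scores on an easy problem and a hard one.

```python
# The same "report the true probability" mechanism, scored with the Brier
# score (lower is better) on an easy question and an essentially random one.
import random

random.seed(1)
N = 10_000

def brier(prob, outcomes):
    """Mean squared error of a constant probability forecast."""
    return sum((prob - o) ** 2 for o in outcomes) / len(outcomes)

easy_outcomes = [1] * N                                         # the sun rises tomorrow
hard_outcomes = [int(random.random() < 0.5) for _ in range(N)]  # fair-coin event

print("easy problem, ideal forecast: Brier %.3f" % brier(1.0, easy_outcomes))
print("hard problem, ideal forecast: Brier %.3f" % brier(0.5, hard_outcomes))
# Both forecasts are as good as possible, yet the scores differ enormously,
# so accuracy comparisons only make sense across mechanisms on the same problems.
```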

Presumably economists have made use of historical data on just how accurate stock markets can be, by comparing a stock's price at a given time with the present value of all its future returns. E.g. comparing the prices of the Dow or FTSE stocks in 1960 with the 1960 PV of their returns over the next 40 years, say. Do we have that accuracy expressed in statistical terms, to give us some idea of the difference between the absolute and relative goodness of markets as predictors?

Andrew, I quote from my just published "Decision Markets for Policy Advice":

Such markets ... have so far done well in every known head-to-head field comparison with other social institutions that forecast. Orange juice futures improve on National Weather Service forecasts, horse race markets beat horse race experts, Academy Award markets beat columnist forecasts, gas demand markets beat gas demand experts, stock markets beat the official NASA panel at fingering the guilty company in the Challenger accident, election markets beat national opinion polls, and corporate sales markets beat official corporate forecasts. [14] http://hanson.gmu.edu/impol...

Also, we've now each confused the other whether our claims are relative or absolute. You interpreted my claim as relative, I interpreted yours as absolute. Probably worth each of us being more careful with our language, I suppose.

OK. Is there good evidence of your claim?

I ask because I'm inclined to disbelieve it, given what we know about the madness of crowds. My read on markets is that while they are very efficient ways of transmitting information, the information they transmit is only as good as the people providing it (i.e. if many people - or a few very rich people - think Pets.com is a great company, its price will rise, regardless of the stupidity of its business plan).

And given how prone to irrationality people can be, both as individuals and groups, we should hardly expect that buying and selling derivative products is the one area in which people behave rationally.

I mean my claim to be read in light of the goal of this forum, overcoming bias. I claim that relative to the other means we have to overcome bias, this approach does well, and to evaluate my claim it is relative bias and accuracy that one should examine.
