A 1990 Corporate Prediction Market

Betting markets have been around for a long time, and as far as I know, until recently they were all created to help traders achieve goals such as hedging, gambling, proving themselves, and so on.  What appears to be new is that non-traders are now creating and subsidizing some markets, in order to gain information by believing the market prices.  Such price estimates are remarkably robust against biases, making this a promising approach to reducing bias.

The earliest such market that I know of was one I helped create at Xanadu, Inc. in 1990. 

Xanadu, then owned by Autodesk, was working on what would have been a different kind of world wide web.   I consulted for them, and had been telling them about my idea futures concept.   In April 1989 the "cold fusion" controversy erupted, and I created a market for Xanadu employees and consultants to bet on this claim:

By 1/1/91 a <1 liter device will have generated over 1 watt of power output more than input from room-T fusion, including amortized power to create/separate components.

This market used a paper-based mechanical market-maker (which loses money on average) placed on a wall of the main common room.  18 people participated, and everyone’s current stakes were posted for all to see.  The price slowly fell, reaching 7% in May 1990.  At this time the fusion market was replaced with a market on this claim:

Xanadu will deliver its product before Premier Deng of China dies. 

(They hoped that their product could help China through a post-Deng transition to democracy.)  This market started at a price around 70%, and continued until Xanadu lost its Autodesk support in 1992 (the same year Tim Berners-Lee introduced his version of the web).   Deng died before the product was delivered. 
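The post does not describe the paper market-maker's pricing rule. As an illustrative aside, here is a minimal sketch of one way a subsidized, money-losing market maker can work, using the logarithmic market scoring rule (a later design of Hanson's); the class name, subsidy parameter, and trade sizes below are assumptions for the example, not details of the 1990 mechanism.

```python
import math

class LMSRMarketMaker:
    """Logarithmic market scoring rule market maker for a binary claim.

    The subsidy parameter b bounds the market maker's worst-case loss
    at b * ln(2) for two outcomes; that bounded subsidy is the sense
    in which such a market maker "loses money on average" to traders
    who bring real information.
    """

    def __init__(self, b=100.0):
        self.b = b
        self.q = [0.0, 0.0]  # outstanding shares for YES / NO

    def cost(self, q):
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def price(self, outcome):
        """Current probability estimate for the given outcome."""
        z = sum(math.exp(qi / self.b) for qi in self.q)
        return math.exp(self.q[outcome] / self.b) / z

    def buy(self, outcome, shares):
        """Return the cost of buying `shares` of `outcome`; update state."""
        new_q = list(self.q)
        new_q[outcome] += shares
        fee = self.cost(new_q) - self.cost(self.q)
        self.q = new_q
        return fee

mm = LMSRMarketMaker(b=100.0)
p0 = mm.price(0)       # prices start at 50/50
cost = mm.buy(0, 50)   # a trader buys 50 YES shares
p1 = mm.price(0)       # YES price rises above 0.5
```

Unlike an order book, a market maker of this kind always quotes a price, so even a small group like the 18 Xanadu participants could trade at any time.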

Support from the Xanadu manager, Marc Stiegler, was crucial.  His goal was to focus employee attention on the risk that Xanadu might not deliver its product soon.  Does anyone know of an earlier example of a speculative market which existed primarily because non-traders wanted to use the information in its prices?

Thanks to Roger Gregory for archiving his email. 

Addendum: The Deng market was Marc Stiegler’s idea; he also authored a 1999 science fiction novel, EarthWeb, wherein prediction markets help humans to fend off aliens. 

  • Andrew Edwards

    Such price estimates are remarkably robust against biases, making this a promising approach to reducing bias.

    That would be nice.

    Regrettably, speculative equity markets contain just as much biased behavior as anything else, and there is in fact evidence to that effect. See almost every paper here:


  • Andrew, your one link is far from sufficient to support the claim that “speculative equity markets contain just as much biased behavior as anything else.” I just looked through a half dozen of the papers there and didn’t find one that compared markets to anything else, so how could they possibly support your relative bias claim?

  • Andrew Edwards

    Figure of speech. Try: “speculative equity markets contain substantial bias”

  • Andrew, yes, agreed, speculative markets often have substantial bias. But this does not mean they cannot be, as I claimed, “remarkably robust against biases.”

  • Andrew Edwards

    There’s a language-usage problem then. Because I interpret “remarkably robust against bias” as a competing claim to “have substantial bias”.

    Both are absolute, not relative statements, and both make a claim about the bias in the system. One claims that there is a fairly low level of bias, one claims that there is a fairly high level of bias.

    We could disagree about what ‘fairly high’ and ‘fairly low’ mean, and it may be worth defining our terms there. But I don’t see how something can be both ‘fairly high’ and ‘fairly low’.

    Thankfully both can be constructed as falsifiable statements, which may help us define our language usage and will definitely help us determine truth. Care to have a go at a more Popperian declarative?

  • I mean my claim to be read in light of the goal of this forum, overcoming bias. I claim that, relative to the other means we have to overcome bias, this approach does well, and to evaluate my claim it is relative bias and accuracy that one should examine.

  • Andrew Edwards

    OK. Is there good evidence of your claim?

    I ask because I’m inclined to disbelieve it, given what we know about the madness of crowds. My read on markets is that while they are very efficient ways of transmitting information, the information they transmit is only as good as the people providing it (i.e. if many people – or a few very rich people – think Pets.com is a great company, its price will rise, regardless of the stupidity of its business plan).

    And given how prone to irrationality people can be, both as individuals and groups, we should hardly expect that buying and selling derivative products is the one area in which people behave rationally.

  • Andrew Edwards

    Also, we’ve now each confused the other about whether our claims are relative or absolute. You interpreted my claim as relative, I interpreted yours as absolute. Probably worth each of us being more careful with our language, I suppose.

  • Andrew, I quote from my just published “Decision Markets for Policy Advice”:

    Such markets … have so far done well in every known head-to-head field comparison with other social institutions that forecast. Orange juice futures improve on National Weather Service forecasts, horse race markets beat horse race experts, Academy Award markets beat columnist forecasts, gas demand markets beat gas demand experts, stock markets beat the official NASA panel at fingering the guilty company in the Challenger accident, election markets beat national opinion polls, and corporate sales markets beat official corporate forecasts. [14] http://hanson.gmu.edu/impolite.pdf

  • Presumably economists have made use of historical data on just how accurate stock markets can be, on the basis of comparing its price at a time with the value of a stock being the present value of all future returns. E.g. comparing the prices of the Dow or FTSE stocks in 1960 with the 1960 PV of their returns over the next 40 years, say. Do we have that accuracy expressed in statistical terms, to give us some idea of the difference between the absolute and relative goodness of markets as predictors?
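    A sketch of the comparison Nicholas proposes, discounting realized returns back to the purchase date; all prices, dividends, and the discount rate below are invented for illustration:

```python
def present_value(cash_flows, r):
    """Discount a list of yearly cash flows back to year 0 at rate r."""
    return sum(cf / (1 + r) ** (t + 1) for t, cf in enumerate(cash_flows))

# Invented example: a stock bought in 1960 for $100, paying $5/year in
# dividends for 40 years, then sold for $150, discounted at 5%/year.
price_1960 = 100.0
dividends = [5.0] * 40
terminal = 150.0
r = 0.05

pv = present_value(dividends, r) + terminal / (1 + r) ** 40
forecast_error = pv - price_1960  # how far the 1960 price missed
```

    Repeating this over many stocks would give the error distribution Nicholas asks about, though, as noted in the reply below, interpreting it requires care.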

  • Nicholas, yes, I presume there are statistics of the form you seek, but they are not very relevant. The accuracy of a forecasting mechanism on a forecasting problem varies both with the mechanism’s ability and with the problem’s difficulty. But problem difficulty varies by far larger factors than mechanism ability plausibly could. Just about any mechanism will have very high accuracy on predicting whether the sun will come up tomorrow, and very low accuracy on predicting which radioactive Carbon-14 atom (with a half-life of 5700 years) will decay during each nanosecond. So when evaluating a mechanism it is crucial to control for problem difficulty.

  • Andrew Edwards

    Thanks Robin, I’ll review the paper.

    Nicholas asks a great question about absolute level of error in markets. Unfortunately, Nicholas, the trick is not to compare actual returns versus price, it’s to compare risk-adjusted actual returns versus price.

    What you end up finding is that the ‘rational’ risk-adjustment is quite different from the ‘observed’ risk-adjustment. The observed is also possibly nonlinear and difficult to model theoretically (and tax-distorted and all kinds of other garbage), although interesting inroads have been made using Bayesian models.

    So you’d get a very different value between market price and PV of returns, but it would be very difficult to say how much of that is due to errors in the market’s judgement of risk and how much is due to errors in the market’s judgement of the value of the underlying. And how much is due to non-market shifts such as tax distortion or demographic changes (baby boomers buying growth stocks in the 70s and income stocks in the 00s).

    Of course, all those errors should have distortionary effects on the predictive value of markets – it’s just hard to do the exact analysis you want to do and draw the conclusion you want to draw, I think. Could be wrong, of course.

  • Andrew Edwards

    One thing you can do is use a test and control method. Two ways to do this:

    1) Look at two stocks with nearly-identical properties, but with one subject to a test condition (say a “famous” CEO) and one free of that condition. It’s not a perfect test (lots of confounding factors), but if you do it enough times you do get a sense for the tendency of the market to be subject to the specific bias you’re testing. Many of the papers in the site linked above do just that – compare stocks with media attention to those without media attention, for instance, and find the degree to which media attention biases price.

    2) Look at populations with specific biases. For instance, you can compare women’s investment behaviour to men’s investment behaviour. To the extent that they behave differently you can imply that the market will be biased towards the way that men view value (since far more men are involved in the market than women). Professor Odean found that women are better investors by a statistically significant margin, suggesting to me that they may be able to exploit the inefficiencies in the overall market caused by male biases swamping female biases in overall market pricing.

  • Andrew Edwards

    One other thing while I’m thinking of it. While they’re similar and probably related, “accurate” and “free from bias” are conceptually different notions.

    For instance, Joe could have a bias in favour of his local sports team. If Joe lives in the Bronx, he could always believe that the Yankees will win the World Series. The fact that he’d be right more often than a fan of any other baseball team doesn’t prove that his process is unbiased.

    We need to consider, in other words, both validity and reliability, not just reliability.

  • Paul Gowder

    Do prediction markets trade learning for accuracy? Unlike other methods of making predictions, which require something at least passing for a reasoned argument, prediction markets are wholly opaque (and indeed, people with knowledge have an incentive to create false arguments to mislead others into betting against them). Is the trade-off worth it?

    Here’s a concrete example. I tried to engage in a little noise trading on tradesports during the period of uncertainty before Bush announced the Alito nomination. I ended up losing the (fortunately small) amount of money I put into it when I bet heavily against Alito about 4 hours before the nomination was announced, when he was running very, very high. My reasoning ran as follows: “There is no public information pointing more to Alito than anyone else, and the price seems too high for people with inside information to be responsible for the whole value. Therefore there must be some kind of irrationality at work, perhaps one of Cass Sunstein’s cascades. The price should go down.” Obviously, this was wrong. But to this day, I don’t know why. Maybe people with inside information put a lot more money in the market than I thought. Maybe a lot of people were a lot better at analyzing the political situation than I. And if that’s true, I don’t know what information or reasoning they used that I missed. Maybe a reliable expert made an announcement, supported by a very convincing argument, that I missed. There’s no way for me to learn how everyone other than me knew it was to be Alito four hours before.

    By contrast, if some accurate pundit had made the prediction on the basis of arguments and it had turned out true, I’d have new information about the truth of those arguments, and so would the rest of the society. By giving that hypothetical accurate pundit a financial incentive to secretly bet on the result instead of argue for the result, the public was deprived of potentially a lot of information.

  • Paul, yes it is theoretically possible that prediction markets do not induce people to reveal enough about their opinions, relative to some other kind of forum. Michael Abramowicz has tried to invent variations that do better on this issue, but they have not been tested.

  • What do you think about the fact that Xanadu employees put their probability of success as high as 70%? This product is somewhat legendary today for its repeated failures despite its high promise. A number of articles have been written (see http://www.wired.com/wired/archive/3.06/xanadu.html from 1995 for example) analyzing the many obstacles and problems which arose over the years to prevent it from succeeding. With hindsight, 70% seems to be a vast overestimate of any rational estimation of its chances of success.

    Do you recall what the price trend was? Was the probability falling as time went on?

  • As usual, project insiders acted very confident of success, and those of us more on the periphery expressed more doubts. The market price moderated the confidence of the insiders, and had more outsiders been able to participate we would have moderated that confidence even more. The price did fall, but not that far, at least for a while.

  • Seems like this is a case where anonymity would be important, otherwise employees might be seen as disloyal if they predicted the product would fail. How was that handled at Xanadu?

  • Hal, this market was not anonymous, which may indeed have been a mistake.

  • Insiders were paying attention to software development progress and deliveries to customers (we had about a handful of beta customers in the last year, before Xanadu was shut down when Autodesk reorganized.) None of the active participants had much view into events at the parent company. (I was an employee, even though my name doesn’t appear under the link to employees and consultants.)

    Another lesson is that it’s crucial to have multiple points of view. All the participants in that particular market were software developers.