Beware the Prescient

I was watching Robin’s speech at the O’Reilly Open Source Convention, where he highlighted the usefulness of accumulating track records.  I want to highlight a curious bias toward those who successfully call improbable events.  I am talking here about events that have no cross section, so we cannot calibrate them in our lifetimes (e.g., something that has a 1% chance of happening every year, as opposed to something like death rates, which have a small probability but a wide cross section).  As Alan Greenspan noted in his Congressional testimony on the financial crisis a couple of weeks ago:

the fact that there are a lot of people who raised issues about problems emerging. But there are always a lot of people raising issues, and half the time they are wrong. And the question is, what do you do?

Indeed, there are always people predicting imminent financial disaster, and they are usually wrong (see Ravi Batra).  From a Bayesian perspective: given a small probability of X, say 2%, and many (but proportionately few) people forecasting X, when X happens, should our weight on their opinions be upgraded to a probability of 4%?  If 4%, is that sufficiently large that we should make massive adjustments to incorporate their logic?  What if their correct call merely reminds us that they are outliers who habitually emphasize 2% events?  What if X happened, but because of Y, and not the Z they argued for ex ante?
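To make that arithmetic concrete, here is a minimal Bayesian sketch of one way a 2% prior could move to roughly 4% after a correct call.  All of the conditional probabilities are illustrative assumptions, not estimates:

```python
# Hedged sketch: how much should one correct crash call upgrade our belief
# in the caller's theory?  All numbers below are illustrative assumptions.

p_theory_right = 0.02    # prior that this forecaster's theory of the world is correct
p_crash_if_right = 1.0   # if the theory is right, the crash was bound to happen (assumption)
p_crash_if_wrong = 0.5   # the crash could also have happened for unrelated reasons Y (assumption)

# Bayes' rule: P(theory right | crash happened)
posterior = (p_crash_if_right * p_theory_right) / (
    p_crash_if_right * p_theory_right
    + p_crash_if_wrong * (1 - p_theory_right)
)

print(f"prior:     {p_theory_right:.1%}")   # 2.0%
print(f"posterior: {posterior:.1%}")        # ~3.9%, roughly the 4% above
```

Under these assumptions a correct call only doubles a small prior; whether that doubling justifies massive adjustments is exactly the question.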

With a big issue like the economy there are always many Cassandras who will have called the crash; the question then is whether their theory is actually helpful.  After all, many critics of capitalism celebrated the Great Depression as vindication of Marx, yet in retrospect the Depression seems to have been a problem independent of both Marx’s conception of the laws of production and the relative attractiveness of communism.

No financial expert focused on the credit risk of mortgages as a serious threat to our financial system prior to 2006, though some, like Shiller, noted in 2005 that ‘significant further rises in these markets could lead, eventually, to even more significant declines’, which is typical of the hedged way such warnings were framed.  Most of the concern about Fannie and Freddie focused on excessive interest rate risk arising from the notoriously difficult problem of estimating prepayment risk (Taleb see here, Mankiw here).  Others, like James Grant, Stiglitz, or Nouriel Roubini, saw risks from disparate arenas like oil, globalization, or secular increases in leverage.  Charles Morris’s "The Trillion Dollar Meltdown," David Smick’s "The World Is Curved," George Soros’s "The New Paradigm for Financial Markets," Kevin Phillips’s "Bad Money," and Peter Schiff’s "Crash Proof" basically cover any possible cause (though I’m waiting for someone to blame it on Global Warming).  Putting a belt and suspenders on all of these risks would be a major impediment to future productivity.

The problem is that when a small probability event happens to a complex system it is not so simple as reverse engineering a bridge collapse or a space shuttle disaster.  The I-35W bridge collapsed in Minneapolis in the summer of 2007, and the main culprit appears to have been a simple gusset plate of insufficient thickness: one-half inch instead of the more appropriate one inch.  Doh!   But in a complex system like an economy there were multiple failures: regulators and legislators encouraging subprime lending, investors taking on too much debt, the rating agencies saying the more lenient lending standards were immaterial, investors not appreciating that housing price increases masked underlying credit problems, and no one seeing the cascading implications of an increase in default risk within the non-transparent balance sheets of global financial institutions.  All were necessary, none sufficient, and the intellectual errors behind them were largely independent, made by actors with very different objectives.  As with the Great Depression, there was no single bad act behind our current problem.

Those most vociferously denouncing the stupidity underlying this mess have a theory that focuses on one cause (sometimes an incredibly vague one, like ‘overconfidence’ or ‘greed’), because experts usually have very specific theories (e.g., ‘peak oil’); but this neglects the other causes that were also necessary, and so their accounts are incomplete.  A robust risk system is usually based on a theory, but it is also quite klugey: it has several, if not 30, ad hoc rules too.  In reverse-engineering a complex-system disaster, it is important to realize that high-profile experts with their pet theories tend to overemphasize a small part of the problem.

Disasters that happen once a generation usually become benchmarks for future stress testing.  They highlight flawed assumptions.  But using them to anoint new theories, especially those proposed by ‘correct’ forecasters, is usually not a good idea.  Since the securitization market eliminated subprime and Alt-A lending back in 2007, when this problem became apparent, I’m afraid the main remedy against a repeat of this crisis has already occurred.  Now the question is, how many additional cures will be heaped on, loaded with pork, special-interest rights, subsidies, and protections, all hiding behind some permabear theory that was incorrect for the 25 years prior to 2007?

  • http://reading.kingrat.biz/ King Rat

    Oh please. The run-up in housing prices was completely visible and the housing price bubble popping was completely foreseeable. What form it took, perhaps not.

    The shorter version of what you are writing is “A stopped watch is right twice a day.” (Doomsayers are always predicting doom and are bound to be right 1% of the time.)

    That still doesn’t provide any evidence that the people who predicted the bubble popping are stopped watches and shouldn’t be listened to (which is the obvious implication).

  • frelkins

    @Eric

    You may be interested in seeing the public CDS data release today from the DTCC. This will address many questions.

    The original “most correct” callers of the crisis have to be Buffett in March 2003, and also Paul Wilmott, the quant, who made a pest of himself telling risk managers to stop looking at the pig in the snake and start taking a microscope to the tail for, oh, about 2 years.

    Finally, David Einhorn warned of near-term catastrophe in June 2008 from this very source, but it had also been pointed out in the Financial Times in a blog post in April on Goldman, as well as a NY Times business piece the first week of August.

    Nouriel maybe just got on TV first. That’s another kind of bias – pundit bias, or what would you call it? – where something isn’t real until it’s been on TV? And the first person to say it on TV gets credit for discovery?

    Since it’s all about track records, let me venture a prediction: the next possible source of contagion is exchange-traded funds. Triple inverse! Yummy bombs, says Kedrosky.

  • John Maxwell

    Real estate bubbles may be rare in any one country, but there have been many real estate bubbles across the world in the last half century, the most famous being Japan’s real estate bubble in the early 1990s. There is no way that the Fed could say that it was ignorant of the possibility of a real estate bubble here in the United States.

  • Paul

    King Rat: Sounds to me like the obvious implication of Eric’s post is that people tend to overvalue prescience. It’s not that you should ignore people who have been right in the past because you assume it’s just luck, it’s just that you shouldn’t let the fact that they were right about some major past event bias your confidence in their predictions about future events (or bias your confidence in their ex post story about why they were right).

    Looked to me like the implication was that stopped watches are right twice a day, so you should have to check the watch at least 3 times a day to see if it’s not stopped (to extend the metaphor in a direction that I’m not necessarily 100% comfortable with).

  • http://occludedsun.wordpress.com Caledonian

    Sounds to me like the obvious implication of Eric’s post is that people tend to overvalue prescience.

    Strange… the theme I see in it is that people tend to mistake hindsight for prescience, and so try to apply to the future crafted solutions that would have prevented what they believe to have gone wrong in the past.

    The post seems mistitled. It’s not prescience that’s dangerous, but the appreciation of foresight without respect for your limitations to apply it.

  • http://hanson.gmu.edu Robin Hanson

    This is of course one of the reasons to rely on prediction markets, both for forecasts and to reward those who foresee better than others.

  • eric falkenstein

    Perhaps one of the overlooked benefits of prediction markets is that they allow those who are knowledgeable yet unsuccessful in persuading others to gain satisfaction via a little personal reward, as opposed to merely being able to say ‘told ya so’. As most individuals have a negligible effect on group decisions, and even with hindsight should not be given too much authority, this at least gives them some benefit.

  • http://silasx.blogspot.com Silas

    And you know a good way to make them even better, Robin? Allow prediction market bettors to make unsecured, leveraged bets! =-D

  • eric falkenstein

    Over-the-counter prediction default swaps (PDOs). What could go wrong?

  • Rick

    The problem is that when a small probability event happens to a complex system it is not so simple as reverse engineering a bridge collapse or a space shuttle disaster.

    I’m not sure those are simple either. To see this, ask yourself “Why was the gusset plate only 1/2 inch thick?” or “Why was the shuttle allowed to fly at temperatures too cold for the O-ring?” Behind each of these engineering decisions is a complex human process, much like the complex human processes involved in the current financial crisis. Oversimplifying by saying that “it is just the 1/2 inch gusset plate” is much like saying “Oh, it is just the bad loans being given out by shady people.” Well, yes, but there are many other things that also contributed. For the financial crisis, this would include things like the overuse of risk models, the increasing leverage that is off-the-public-books, crazy incentives to give bad loans, and so on. For the space shuttle (Challenger) example, this includes engineers not communicating very well with management, management pressures to not be the person who stops the shuttle launch, a strange culture of accountability at NASA, political pressures to perform, insufficient data about operating under cold conditions, etc.

    However, I agree that these complex systems have so many variables that it is very difficult to know which ones are binding, and when, and that observing failures is not a good way of evaluating theories about them.

  • Lord

    What do you do?

    You investigate whether something is occurring that shouldn’t be, rather than just blithely dismissing it.

  • http://rolfnelson.blogspot.com Rolf Nelson

    If you have a large Crank Pool of indistinguishable-to-you apparent-cranks (each of whom has, individually, a low prior probability of not actually being a crank), and at most one of them is actually Not A Crank, and each of them makes a vague enough prediction that multiple cranks will predict correctly by sheer chance, then unfortunately you have no way of completely promoting an apparent-crank out of the Crank Pool based on a single prediction. The best you can do is reshuffle your probability mass within the Crank Pool about who might be the sole non-crank in the pool if one exists (moving probability mass away from the failed apparent-cranks and towards the successful apparent-cranks), but you can’t significantly increase your probability that a non-crank exists in the Crank Pool at all with a single observation in this scenario.
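    To make this concrete, here is a small numerical sketch of the Crank Pool update; every parameter below is made up for illustration:

    ```python
    # Crank Pool sketch (hypothetical numbers): one correct prediction reshuffles
    # probability mass *within* the pool, but barely moves the chance that any
    # non-crank exists in it at all.

    n = 100                 # apparent cranks in the pool
    p_exists = 0.05         # prior that the pool contains exactly one genuine non-crank
    p_hit_crank = 0.30      # a crank's vague prediction comes true by chance
    p_hit_noncrank = 0.90   # the non-crank's prediction comes true

    # Observation: one particular member of the pool predicted correctly.
    p_obs_if_exists = (1 / n) * p_hit_noncrank + ((n - 1) / n) * p_hit_crank
    p_obs_if_not = p_hit_crank

    # Bayes' rule for "a non-crank exists in the pool"
    p_exists_post = (p_obs_if_exists * p_exists) / (
        p_obs_if_exists * p_exists + p_obs_if_not * (1 - p_exists)
    )

    # Within the pool, the correct call multiplies that member's odds of being
    # the non-crank by p_hit_noncrank / p_hit_crank (here 3x), shifting mass
    # away from members whose predictions failed.
    print(f"P(non-crank exists) before: {p_exists:.2%}")        # 5.00%
    print(f"P(non-crank exists) after:  {p_exists_post:.2%}")   # ~5.09%, barely moved
    ```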

  • ac

    What Eric seems to be saying has an analogue in weather forecasting. In the language of that discipline, verifying forecasts of low-probability events is difficult because your sample size is too small to get an accurate picture of skill. And not only do you need to be able to pick the events accurately, your forecast model should predict events with roughly the probability that they actually occur: forecast a hurricane with 100% probability every day and you are a ‘stopped watch’. (A small scoring sketch of this point follows at the end of this comment.)

    Not that this can really apply in economic forecasting, where not only do we not predict the future by applying well-known dynamical rules to the current state, but it’s far from clear that we know what those dynamical rules are. Instead we have a range of experts, each emphasising their own special-interest bit of sub-dynamics.

    Eric mentions Marx, and no doubt there are those who are again suggesting that the current crisis is a vindication of Marx. These are the people who read ‘The Communist Manifesto’ pamphlet and put down Capital before finishing the first chapter (Mick Hume at Spiked has a nice article about this). Or who skipped straight to Trotsky. Marx is interesting for his attempt to describe the dynamical principles behind the capitalist economies of his day. Whether he’s useful today is another question.
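    Here is the minimal scoring sketch of the ‘stopped watch’ point above. The forecasts and outcomes are made up, and the Brier score (the mean squared error of probability forecasts, lower is better) is just one standard verification measure:

    ```python
    # Brier-score sketch of the 'stopped watch' point: a forecaster who shouts
    # 100% doom every period "calls" the rare event, but scores far worse than
    # one who simply quotes the base rate.  All data below are made up.

    outcomes = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]   # the rare event occurs once in ten periods

    always_doom = [1.0] * 10    # "forecast a hurricane with 100% probability every day"
    base_rate   = [0.1] * 10    # forecasts that match the event's base rate

    def brier(forecasts, outcomes):
        """Mean squared error between forecast probabilities and 0/1 outcomes."""
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

    print(f"stopped-watch forecaster: {brier(always_doom, outcomes):.2f}")  # 0.90
    print(f"base-rate forecaster:     {brier(base_rate, outcomes):.2f}")    # 0.09
    ```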
