I was watching Robin’s speech at the O’Reilly Open Source Convention, where he highlights the usefulness of accumulating track records. I want to highlight a curious bias toward those who successfully call improbable events. I am talking here about events that have no cross section, so we cannot calibrate them in our lifetimes (eg, something that has a 1% chance of happening every year, as opposed to something like death rates, which have a small probability but a wide cross section). As Alan Greenspan noted in his Congressional testimony a couple of weeks ago on the financial crisis:
the fact that there are a lot of people who raised issues about problems emerging. But there are always a lot of people raising issues, and half the time they are wrong. And the question is, what do you do?
Indeed, there are always people predicting imminent financial disaster, and they are usually wrong (see Ravi Batra). That is, from a Bayesian perspective, given a small probability of X, say 2%, and many (but proportionately few) forecasters predicting X, when X happens, should the weight on their opinions be upgraded to a probability of 4%? If 4%, is that sufficiently large that we should make massive adjustments to incorporate their logic? What if this merely reminds us they are outliers who emphasize 2% events? What if X happened, but because of Y, and not Z as they argued ex ante?
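To make the Bayesian point concrete, here is a minimal sketch of the update implied above. All the numbers are illustrative assumptions, not claims from anyone's data: suppose 2% of forecasters have genuine insight, a skilled forecaster flags a crisis 90% of the time, and an unskilled one flags it half the time (per Greenspan's "half the time they are wrong").

```python
# Hedged sketch: how much should one correct call of a rare event
# upgrade a forecaster's credibility? Parameters are hypothetical.

def posterior_skill(p_skill, p_call_if_skilled, p_call_if_not):
    """Bayes' rule: P(genuinely skilled | forecaster called the crash)."""
    num = p_skill * p_call_if_skilled
    den = num + (1 - p_skill) * p_call_if_not
    return num / den

# Assumed inputs: 2% base rate of real skill; skilled forecasters call
# the event 90% of the time; unskilled ones call it 50% of the time.
p = posterior_skill(p_skill=0.02, p_call_if_skilled=0.9, p_call_if_not=0.5)
print(f"posterior probability of genuine skill: {p:.1%}")
```

Under these assumptions the posterior lands around 3.5%: a correct call roughly doubles the 2% prior, but leaves the forecaster far more likely to be a lucky permabear than a prophet.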
With a big issue like the economy there are always many Cassandras who will have called the crash, but then the question is whether their theory is actually helpful. After all, many critics of capitalism celebrated the Great Depression as vindication of Marx, yet in retrospect it seems to have been a problem independent of either Marx’s conception of the laws of production or the relative attractiveness of communism.
No financial expert focused on the credit risk of mortgages as a serious threat to our financial system prior to 2006–though some, like Shiller, noted in 2005 that ‘significant further rises in these markets could lead, eventually, to even more significant declines’, which is typical of the hedged way such warnings were framed. Most of the concern about Fannie and Freddie focused on interest rate risk arising from the notoriously difficult problem of estimating prepayments (Taleb see here, Mankiw here). Others, like James Grant, Stiglitz, or Nouriel Roubini, saw risks from disparate arenas like oil, globalization, or secular increases in leverage. Charles Morris’s "The Trillion Dollar Meltdown," David Smick’s "The World Is Curved," George Soros’s "The New Paradigm for Financial Markets," Kevin Phillips’s "Bad Money," and Peter Schiff’s "Crash Proof" basically cover any possible cause (though I’m waiting for someone to blame it on Global Warming). Putting a belt and suspenders on all these risks would be a major impediment to future productivity.
The problem is that when a small-probability event happens in a complex system, it is not as simple as reverse-engineering a bridge collapse or a space shuttle disaster. The I-35 bridge collapsed in Minneapolis last summer, and it appears the main culprit was a gusset plate of insufficient thickness: one-half inch instead of the more appropriate one inch. Doh! But in a complex system like an economy there were multiple failures: regulators and legislators encouraging subprime lending, investors taking on too much debt, rating agencies saying the more lenient lending standards were immaterial, investors not appreciating that housing price increases masked underlying credit problems, and no one seeing the cascading implications of an increase in default risk within the non-transparent balance sheets of global financial institutions. All were necessary, none sufficient, and the intellectual errors behind them were largely independent, made by actors with very different objectives. Like the Great Depression, our current problem has no single bad act behind it.
Those most vociferously denouncing the stupidity underlying this mess have theories that focus on one cause–though sometimes an incredibly vague one, like ‘overconfidence’ or ‘greed’–because experts usually have very specific theories (eg, ‘peak oil’). But this neglects the other causes that were also necessary, and so is incomplete. A robust risk system is usually based on a theory, but is also quite kludgey: it has several, if not 30, ad hoc rules too. In reverse-engineering a complex-system disaster, it is important to realize that high-profile experts with their pet theories tend to overemphasize a small part of the problem.
Disasters that happen once a generation usually become benchmarks for future stress testing. They highlight flawed assumptions. But using them to anoint new theories, especially those proposed by ‘correct’ forecasters, is usually not a good idea. Since the securitization market eliminated subprime and Alt-A lending back in 2007, when this problem became apparent, I’m afraid the main remedy to a repeat of this crisis has already occurred. Now the question is, how many additional cures will be heaped on, loaded with pork, special-interest rights, subsidies, and protections, all hiding behind some permabear theory that was incorrect for the 25 years prior to 2007?