Nassim Taleb has an article related to the current financial crisis. While much of what he says is true, he misleads when he implies that the recent collapse of financial companies resulted from a Black Swan. He claims:
use of probabilistic methods for the estimation of risks did just blow up the banking system
Did people who were skeptical of statistical models keep more of their money through the housing/mortgage collapse than those who relied on statistical models? Maybe, but I see little evidence for that claim.
I see a widespread pattern of mistakes that appear to have been committed by a much larger set of people than just users of statistical models. Homebuyers and politicians fueled the bubble without using any fancy math.
Availability bias appears to explain more of these mistakes than Taleb’s analysis does. Evidence about U.S. housing prices during your lifetime is easier to find and remember than evidence going back a century or covering other countries such as Japan. Figuring out whether an X% drop in home prices would cause your company to collapse isn’t easy, because it depends on the size of the mistakes other companies make. But there are numerous financial panics from which we can derive crude statistical models (although such models may look less convincing than models that were selected because they produce precise-looking results).
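To make the "crude statistical model" point concrete, here is a minimal sketch of the kind of base-rate estimate one could build from long-run, multi-country records. The counts below are hypothetical placeholders, not actual data:

```python
# A crude base-rate model of large housing declines. The counts are
# hypothetical placeholders; substitute real counts from long-run,
# multi-country price records.

observed_years = 100   # hypothetical: years of housing data examined
large_declines = 4     # hypothetical: episodes with a >20% real price drop

# Simplest possible model: treat each year as an independent draw.
annual_prob = large_declines / observed_years
print(f"Crude annual probability of a large decline: {annual_prob:.1%}")

# Probability of at least one such decline over a 30-year mortgage.
prob_over_30_years = 1 - (1 - annual_prob) ** 30
print(f"Chance of at least one large decline in 30 years: {prob_over_30_years:.0%}")
```

Even a model this crude would have assigned a non-trivial probability to a large decline happening during the life of a typical mortgage, which is the point: the evidence was derivable, just not available to memory.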
People who carefully looked for and evaluated as much relevant evidence as they could saw some chance of the current panic happening, regardless of whether they used intuition or fancy statistical models. Some of them warned of the risk. But it was hard for most people to worry about warnings that had been consistently wrong under all the conditions that were fresh in their minds.
Resisting peer pressure isn’t pleasant. The banker who insisted on a 20% down payment for all mortgages got less business during the bubble and was seen by his colleagues as a burden on the bank and an obstacle to helping customers. The regulator who insisted on a 20% down payment for all mortgages was seen as denying the poor the good investments that were available to the rest of the country, and as an obstacle to home ownership (sometimes better described as home borrowing), which governments think ought to be encouraged, in spite of (or because of?) its tendency to increase unemployment.
I don’t see how I could have done a good job as a banker or bank regulator in 2004 and 2005 – I would almost certainly have quit due to frustration.
What can be done to prevent bubbles from repeating? If investors study history and other countries more objectively, they’ll do less to fuel the bubbles and will be moderately wealthier as a result, but making that effort takes a good deal of patience and thought. Avoiding overconfidence would also help, but much of Taleb’s advice boils down to that, and he more or less admits that few people want to follow it.
Overcoming availability bias takes time and effort. Maybe AI will change that someday, but until then it’s hard to hope for more than a modest reduction in the harm done by bubbles.
If a regulation requiring 20% down payments on mortgages could be implemented in a way that is as insulated from politics as the margin rules for stocks, there would be fewer foreclosures after the next housing bubble. But the large companies that are being bailed out have shown they would innovate around any similar restrictions on their leverage.
Taleb also claims:
This absence of "typical" event in Extremistan is what makes prediction markets ludicrous, as they make events look binary.
Prediction markets will fail to answer questions that nobody thinks to ask (does any forecasting method not have this problem?), and will sometimes give the same wrong answers that other methods do. Those are reasons to worry about people becoming overconfident about prediction markets, but not to think that there’s a better alternative. (I don’t think Taleb disagrees, but a careless reading of his essay could easily lead people to think he disagrees more than he actually does).
Prediction markets can be designed to focus on binary outcomes, or they can be designed to produce real-valued predictions (such as "the number of combat deaths in the next 5 years", or to better focus attention on extreme outcomes, the log of that number). If you look past Taleb’s hyperbole, you can see a valid concern that the incentives facing companies such as Intrade are causing them to focus on exciting predictions rather than on producing valuable knowledge.
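To illustrate the contrast, here is a minimal sketch of the two contract designs. The payout functions and the 100,000-death threshold are hypothetical, not drawn from any actual market:

```python
import math

def binary_payout(outcome_deaths: int, threshold: int = 100_000) -> float:
    """Pays $1 if combat deaths exceed a threshold, else $0.
    Collapses all information about magnitude into one bit."""
    return 1.0 if outcome_deaths > threshold else 0.0

def log_scaled_payout(outcome_deaths: int, cap: float = 10.0) -> float:
    """Pays in proportion to log10 of the outcome (capped).
    A 10x larger catastrophe changes the payout, so traders have an
    incentive to distinguish bad outcomes from extreme ones."""
    return min(math.log10(max(outcome_deaths, 1)), cap)

for deaths in (1_000, 100_000, 1_000_000, 10_000_000):
    print(deaths, binary_payout(deaths), round(log_scaled_payout(deaths), 2))
```

The log-scaled contract is one way to read Taleb’s concern: it rewards traders for distinguishing a large disaster from an enormous one, instead of lumping everything past a threshold into "yes".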
jor: I agree BS was overlevered. So, sure, mistakes were made, as is obvious from their failure. But the idea of Black Swans doesn't get at their essence. Hubris? I don't think it helps to merely say people should not have hubris--no one intends to have it. Do not have faith in bad models? Again, no one attempts to do this. Expect the unexpected? What does that mean? Taken literally, one should never invest in anything with a gestation period.
I had no idea investment banks like BS and UBS warehoused so many asset-backed securities, which doesn't make much sense, but that error stems from bad transfer pricing: it should never have made economic sense for a bank to borrow at the AA rate to buy AA or AAA mortgage-backed securities. A good transfer pricing system would have stopped that. If you read UBS's report to shareholders, they go over the errors pretty well (which were similar to Bear's); you can google it, released around April of this year. That's a boring reason, but it gets at their mistake pretty well. More fundamentally, the failure of mortgages was a function of relaxing underwriting standards over the past 15 years, which I think is best laid out in Stan Liebowitz's piece, a fascinating tale, but again, the Black Swan is pretty irrelevant to that line of analysis.
Now, to say they repeatedly make the mistake of going bankrupt anthropomorphizes the market in a silly way. Some people make mistakes every 5 to 10 years in a systematic way, but they are different people, in different fields, making mistakes about different financial structures. The annual default rate for nonfinancial companies, historically, is around 1%, so they usually do not make mistakes, though defaults cluster over time (e.g., 2001, 1990, 1981, 1970). Every panic, or crisis, is different, because people aren't so dumb as to make the exact same mistake twice. I don't see the Black Swan as a fruitful way to group these mistakes, because 'overconfidence' is merely evidenced by failure in these contexts, like saying don't buy assets that will decline a lot in value.
Combinatorial prediction markets allow one to ask billions of questions at once, which will cover a lot more black swans.

How does that work?
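One mechanism that fits this description is Robin Hanson's logarithmic market scoring rule (LMSR) run over joint outcomes: with n binary questions there are 2^n joint states, so n near 30 already yields over a billion implicitly priced combinations. A minimal sketch, with illustrative questions and a deliberately tiny outcome space (nothing here comes from an actual market):

```python
import itertools
import math

# Three hypothetical binary questions; their joint outcomes form 2**3 = 8
# states. With n = 30 questions there would be 2**30 states -- over a
# billion priced combinations.
questions = ["recession", "housing_drop", "bank_failure"]
states = list(itertools.product([False, True], repeat=len(questions)))

b = 10.0                           # LMSR liquidity parameter
shares = {s: 0.0 for s in states}  # shares outstanding per joint state

def cost(q):
    """LMSR cost function: C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q.values()))

def price(state):
    """Instantaneous price (probability) of one joint state."""
    total = sum(math.exp(qi / b) for qi in shares.values())
    return math.exp(shares[state] / b) / total

def buy(state, amount):
    """Buy `amount` shares of a joint state; returns the trader's cost."""
    before = cost(shares)
    shares[state] += amount
    return cost(shares) - before

# A trader who thinks "recession AND housing_drop AND bank_failure" is
# underpriced buys that joint state, moving its probability up.
target = (True, True, True)
paid = buy(target, 5.0)
print(f"paid {paid:.3f}, new price of joint state: {price(target):.3f}")

# Marginal probability of any single question = sum over consistent states.
p_recession = sum(price(s) for s in states if s[0])
print(f"implied P(recession) = {p_recession:.3f}")
```

In practice the outcome space is far too large to enumerate explicitly the way this sketch does, so combinatorial market makers work with factored representations instead; the point of the sketch is just to show why the number of askable questions explodes combinatorially.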