White Swans Painted Black

Nassim Taleb has an article related to the current financial crisis. While much of what he says is true, he misleads when he implies that the recent collapse of financial companies resulted from a Black Swan. He claims:

use of probabilistic methods for the estimation of risks did just blow up the banking system

Did people who are skeptical of statistical models keep more of their money in the housing/mortgage collapse than those who rely on statistical models? Maybe, but I see little evidence for that claim.

I see a widespread pattern of mistakes that appear to have been committed by a much larger set of people than users of statistical models. Homebuyers and politicians fueled the bubble without using any fancy math.

Availability bias appears to explain more of these mistakes than Taleb’s analysis does. Evidence about housing prices from the U.S. during your lifetime is easier to find and remember than evidence going back a century or covering other countries such as Japan. Figuring out whether an X% drop in home prices would cause your company to collapse isn’t easy, because it depends on the size of the mistakes other companies make. But there are numerous financial panics from which we can derive crude statistical models (although the models may look less convincing than models selected to produce results that look more precise).

People who carefully looked for and evaluated as much relevant evidence as they could saw some chance of the current panic happening, regardless of whether they used intuition or fancy statistical models. Some of them warned of the risk. But it was hard for most people to worry about warnings that had been consistently wrong under all the conditions that were fresh in their minds.

Resisting peer pressure isn’t pleasant. The banker who insisted on a 20% down payment for all mortgages got less business during the bubble and was seen by his colleagues as a burden on the bank and an obstacle to helping customers. The regulator who insisted on a 20% down payment for all mortgages was seen as denying the poor the good investments that were available to the rest of the country, and as an obstacle to home ownership, sometimes better described as home borrowing (governments think home ownership ought to be encouraged, in spite of (or because of?) its tendency to increase unemployment).

I don’t see how I could have done a good job as a banker or bank regulator in 2004 and 2005 – I would almost certainly have quit due to frustration.

What can be done to prevent bubbles from repeating? If investors study history and other countries more objectively, they’ll do less to fuel the bubbles and will be moderately wealthier as a result, but investors need a good deal of patience and thought to make that effort. Avoiding overconfidence would also help, but much of Taleb’s advice boils down to that, and he more or less admits that few people want to follow this advice.
Overcoming availability bias takes time and effort. Maybe AI will change that someday, but until then it’s hard to hope for more than a modest reduction in the harm done by bubbles.

If a regulation requiring 20% down payments on mortgages could be implemented in a way that is as insulated from politics as the margin rules for stocks, there would be fewer foreclosures after the next housing bubble. But the large companies that are being bailed out have shown they would innovate around any similar restrictions on their leverage.

Taleb also claims:

This absence of "typical" event in Extremistan is what makes prediction markets ludicrous, as they make events look binary.

Prediction markets will fail to answer questions that nobody thinks to ask (does any forecasting method not have this problem?), and will sometimes give the same wrong answers that other methods do. Those are reasons to worry about people becoming overconfident about prediction markets, but not to think that there’s a better alternative. (I don’t think Taleb disagrees, but a careless reading of his essay could easily lead people to think he disagrees more than he actually does).

Prediction markets can be designed to focus on binary outcomes, or they can be designed to produce real-valued predictions (such as "the number of combat deaths in the next 5 years", or to better focus attention on extreme outcomes, the log of that number). If you look past Taleb’s hyperbole, you can see a valid concern that the incentives facing companies such as Intrade are causing them to focus on exciting predictions rather than on producing valuable knowledge.
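One way to get a real-valued prediction out of a market is to bucket the log of the outcome and run a market over the buckets using Hanson’s logarithmic market scoring rule (LMSR). Here is a minimal sketch; the bucket boundaries, liquidity parameter, and the trade are invented for illustration.

```python
import math

B = 100.0   # LMSR liquidity parameter: larger means deeper, slower-moving prices

def cost(q):
    """LMSR cost function; a trade moving shares from q to q2 costs cost(q2) - cost(q)."""
    m = max(q)   # subtract the max before exponentiating, for numerical stability
    return m + B * math.log(sum(math.exp((qi - m) / B) for qi in q))

def prices(q):
    """Market probabilities implied by outstanding shares q (a softmax)."""
    m = max(q)
    exps = [math.exp((qi - m) / B) for qi in q]
    total = sum(exps)
    return [e / total for e in exps]

# Buckets for log10(combat deaths in the next 5 years), midpoints 10^2.5 .. 10^6.5
midpoints = [2.5, 3.5, 4.5, 5.5, 6.5]
q = [0.0] * len(midpoints)   # no shares outstanding yet: uniform prices

# A trader who thinks the middle bucket is underpriced buys 50 shares of it
new_q = list(q)
new_q[2] += 50.0
fee = cost(new_q) - cost(q)   # what the trade costs the trader
q = new_q

p = prices(q)
expected_log = sum(pi * mi for pi, mi in zip(p, midpoints))
print("bucket probabilities:", [round(pi, 3) for pi in p])
print("market expectation of log10(deaths): %.2f" % expected_log)
```

Working in the log of the outcome, as suggested above, keeps a market like this focused on the extreme buckets that a raw-count market would price as an afterthought.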

  • http://goodmorningeconomics.wordpress.com jsalvati

    I don’t really know much about the crisis, but was the problem really that subprime MBSs were widespread? Wouldn’t the ideal have been that subprime MBSs were widespread but people had contingencies for large swings in the default rate?

  • http://hanson.gmu.edu Robin Hanson

    Combinatorial prediction markets allow one to ask billions of questions at once, which will cover a lot more black swans.

  • http://blog.greenideas.com botogol

    I wonder whether a combinatorial prediction market, asking a billion questions at once, would attract many punters.

  • themightypuck

    The current crisis certainly is not a black swan in the classic sense: there have been plenty of financial meltdowns/panics. That said, Taleb seems to make a good case for the failure of the markets to deal with risk. I’m not expert enough in statistics to know whether his clever juxtaposition of a turkey and IndyMac is legitimate, but it is a fairly compelling analogy.

  • Joe Torben

    It still remains to be seen who will bear the burden of this bubble collapsing, but there is the definite possibility that taxpayers will foot an incredibly large part of the bill. This is quite unlike the dot com bubble, where clueless investors got burned the most.

    This bubble happened because poor homeowners and greedy bankers *correctly* assessed the market. Any number of things could have happened, but the three most likely were:

    1. The bubble will burst after I get out or after I have made enough off it to come out well anyway. This is essentially true for most people getting in before 2005 or so. (“There are no losses”)

    2. The bubble will burst while I’m still in. This will force me to leave my house and rent, but if I hadn’t bought, I would have rented anyway. I had a house for a few years, so I’m really better off. For the bankers: a few years of extravagant bonuses followed by a job loss beats never having had the job at the bank in the first place. However, the major financial burden will fall on the taxpayers. (“Profits are private, losses are socialized”)

    3. The bubble will burst, and those responsible will have to foot the bill. (“Profits and losses are private”)

    Doing what most everyone did was correct in scenarios 1 and 2, but incorrect in scenario 3. After the fact, we know that scenario 2 was the one that came to pass. Suggesting that this was in some way unexpected seems a bit silly to me. Suggesting that people should have behaved differently, even though we now know it was in fact in their best interest to act the way they did, seems more than a bit silly to me. In fact, that is downright idiotic!

  • http://www.riskanalys.is Nigel Mellish

    When I first read that article, two things immediately came to mind:

    1.) Lots of hindsight and outcome bias used as an emotional trigger rather than a rational look at the situation.

    2.) I have always thought that Taleb has been at least somewhat guilty of mislabeling the long-tail event as a “black swan”. I would think that by definition and in the context of the historical reference, a black swan is something that there is no prior information for. This definition is quite congruous with Popperism.

    Long tail events, I would think, are things that we have prior information for and are thus represented “in the tail”, or am I wrong? It may be a low probability/high impact event, but I can run Monte Carlo sims all day long and see the probability of the event represented in the tail.

    Clearly there were people (myself among them) who felt like there was a housing bubble. Clearly there are people who anticipated the burst (like short sellers or Alan Greenspan whose new firm is up billions of dollars). If the risk was represented, if there is a generous population of people who anticipated the event, then exactly how is this a “black swan”?
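    The Monte Carlo point above can be sketched in a few lines: draw returns from a fat-tailed distribution and the "low probability/high impact" event shows up in the tail of the simulation. The Student-t driver, the 1% daily scale, and the 50% crash threshold here are all invented for illustration.

```python
import math
import random

random.seed(0)
DF, SIGMA, YEARS = 3, 0.01, 2000   # t with 3 df has much fatter tails than a normal

def student_t(df):
    """Draw from Student's t: a standard normal over an independent scaled chi."""
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)

crashes = 0
for _ in range(YEARS):
    # one simulated year: 250 daily log-returns drawn from the fat-tailed t
    annual_log_return = sum(SIGMA * student_t(DF) for _ in range(250))
    if annual_log_return < math.log(0.5):   # a drop of more than 50%
        crashes += 1

print("simulated years: %d, >50%% drops: %d (%.2f%%)"
      % (YEARS, crashes, 100.0 * crashes / YEARS))
```

    The crash is a rare row in the output, not a surprise absent from it, which is the sense in which the event is "represented in the tail".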

  • Robert

    “(governments think home ownership ought to be encouraged, in spite of (or because of?) its tendency to increase unemployment).”

    The linked article there is really shoddy and suffers from the common mistake of assuming correlation = causation.

    Why are there not many renters in Detroit? Why would there be?

    Why are there so many renters in NYC? It’s so expensive, who wouldn’t rent!

    There are countless more points to be made to counter that argument. Very poor journalism, even worse economics.

  • Ben Hill

    Nigel,

    Taleb has been talking about financial model flaws publicly since 2001. This is not hindsight but elaboration.

    For everyone who “knew” when this bubble was going to burst and made money, there could be ten “corpses” in the financial graveyard of people who thought the same thing but whose timing was off. I think it’s just as important to look at the losers who “knew” it was a bubble as at the winners.

  • http://www.knackeredhack.com knackeredhack

    Ben,

    I think Taleb was arguing about the shortcomings of VaR even as far back as 1997.

    Tim

  • http://riskmarkets.blogspot.com/ Jason Ruspini

    Taleb is basically right, though as many have noted, the usefulness of what he says to thinking people is questionable.

    “Black Swan” is sufficiently vague to do no work in analysis. To start with: how hard to predict? for whom and when?

    Taleb had a *reason* to own puts in 1987 based on specific things he saw happening in the market, not just because one should not be short options in the “fourth quadrant” as a general matter of empiricism. All of that quadrant stuff is just a way of saying that models do more harm than good in some cases. But those cases would also seem to require a cessation of thinking including “naive empiricism”, institutions producing bad incentives, etc. It’s not even clear that the executives at failed companies failed at predicting. Their payoffs were skewed to the *upside* after all.

    And as Peter noted, The Prediction Market is not in the business of providing predictions. I, like everyone, thought about a bailout legislation contract this weekend and determined that the terms of the bailout were too fluid to be usefully captured by a single contract. Instead I wanted a contract that would predict the profitability of the trust (the taxpayers) IF a bill was passed. I was told that such a contract would not “capture the imagination” of traders. Instead we have what is essentially a coin-flip contract on whether some bill, any bill, will be passed by next Tuesday. People seem to enjoy trading that and the exchange will get to sweep fees within a week, so we can understand how that is a better contract.

  • gwern

    > I wonder whether a combinatorial prediction market, asking a billion questions at once, would attract many punters.

    botogol: I think it could, but you’d need some interesting techniques.

    For example, perhaps one could take two parameters as x and y, and the current price as z, and render the billion mini-markets as a 3D mesh; the user could create a second plane encoding how she thinks it could look, and a software agent trades for the user based on differences between the two.

    (This wouldn’t be useful for current markets, I don’t think. You might as well just punch in one number per minimarket, since questions are more or less independent; the visualization wouldn’t be useful.)
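    A minimal sketch of the agent described above: compare the market’s price surface against the user’s belief surface over the same (x, y) grid and trade each mini-market toward the belief. The grid values, the proportional sizing rule, and the edge threshold are all hypothetical.

```python
def trade_plan(price_surface, belief_surface, max_stake=100.0):
    """Return a stake per mini-market: positive = buy YES, negative = buy NO."""
    plan = {}
    for key, price in price_surface.items():
        belief = belief_surface[key]
        edge = belief - price            # perceived mispricing, in [-1, 1]
        if abs(edge) > 0.02:             # ignore tiny edges (fees, noise)
            plan[key] = round(max_stake * edge, 2)
    return plan

# Toy 2x2 grid of mini-markets keyed by (x, y) parameter values
prices  = {(0, 0): 0.50, (0, 1): 0.10, (1, 0): 0.90, (1, 1): 0.49}
beliefs = {(0, 0): 0.70, (0, 1): 0.10, (1, 0): 0.60, (1, 1): 0.50}

print(trade_plan(prices, beliefs))
# buys (0, 0), sells (1, 0); skips the two near-matching markets
```

    The point of the surface interface is that the user only has to shape the belief plane once; the agent then handles the per-minimarket bookkeeping.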

  • eric falkenstein

    Taleb laments that his students, and those submitting to his guest-edited issue of the International Journal of Forecasting, always want to find a ‘better’ predictor. He thinks his refutations of the complete generality of parametric distributions are the main takeaway. Good luck with that.

    It’s trivial to say that if the driving factor has a fat-tailed distribution with few observations to calibrate to, and the function based on that factor is very complicated (his fourth quadrant), chances are forecasts will have high standard errors. But often one is forced to make an estimate, implicitly, and so one is left with identifying the state space and applying probabilities. You can just say, “I have no idea”, but usually one has some idea, and being absolutely certain of the inaccuracy of any estimate is a strange bias of its own.

    He has been slamming Value at Risk since 1996, but value at risk has only grown in popularity as a tool since then (and he de-linked those criticisms from his website, because they were clearly overstated). VaR is imperfect, but one has to see what it replaced, which was a hodge-podge of idiosyncratic reports that did not scale at all. Science is about compression, explaining more with less, and that’s what VaR does. It is imperfect, but people have been talking about jump diffusion models in option and volatility estimates for a long time, and they are not super popular because they add more problems than they remove (adding parameters is always a trade-off). As for extreme events influencing pricing, that goes back to Rietz and his peso-problem explanation of the equity risk premium in 1988, which has a considerable literature and is never addressed by Taleb.

    I think he continually demolishes straw-man arguments, ignoring a vast literature that is aware that estimates–even estimates of variance–have standard errors and are not normally distributed. Stable Paretian distributions and fractals are not popular because they are inferior to modern kluges found in volatility smiles, variations, or GARCH models, not because no one seriously tried them (Mandelbrot introduced them in 1962; Chaos was a best seller in 1990).
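    The compression point about VaR can be made concrete with a minimal historical-simulation VaR; the P&L numbers below are fabricated for illustration.

```python
import math

def historical_var(pnl, level=0.99):
    """Smallest loss L such that at least `level` of days lost no more than L."""
    losses = sorted(-x for x in pnl)           # losses as positive numbers, ascending
    k = math.ceil(level * len(losses)) - 1     # index of the level-quantile
    return losses[k]

# 500 fabricated daily P&L observations: mostly small gains, a few large losses
pnl = [0.1] * 480 + [-1.0] * 15 + [-5.0] * 5

print("99% 1-day VaR:", historical_var(pnl))
```

    One number now summarizes 500 idiosyncratic daily reports, which is the scaling VaR bought its users. The same number is silent about how bad the losses beyond it can be, which is the core of Taleb’s complaint.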

  • http://www.riskanalys.is Nigel Mellish

    “Taleb has been talking about financial model flaws publicly since 2001. This is not hindsight but elaboration.”

    OK, maybe. And as a popular author he suffers the unfortunate need to be fantastic in his storytelling as well as rational in his choice of supporting evidence. And I appreciate it’s not easy to do that. But I still feel like he’s playing on the lack of sophistication in the broader population to recognize and refute cognitive bias in order to sell books.

    Second, if he has been talking about flaws in the financial model – then how is that a black swan? As others are pointing out – he obviously had relevant priors he was using to make that statement.

    “For everyone who “knew” when this bubble was going to burst and made money, there could be ten “corpses” in the financial graveyard of people who thought the same thing but whose timing was off. I think it’s just as important to look at the losers who “knew” it was a bubble as at the winners.”

    Right! Which seems to be evidence that the world is working in a perfectly normal manner here. It’s not like the “Innovator, Imitator, & Idiot” distribution isn’t known. You can’t expect everyone to be winners. And, back to the point of “white swans painted black” – if there are winners or people like Taleb himself who can anticipate the tendency towards a tail event – how is the occurrence of that event a “black swan”?

  • milieu

    “Right! Which seems to be evidence that the world is working in a perfectly normal manner here. It’s not like the “Innovator, Imitator, & Idiot” distribution isn’t known. You can’t expect everyone to be winners. And, back to the point of “white swans painted black” – if there are winners or people like Taleb himself who can anticipate the tendency towards a tail event – how is the occurrence of that event a “black swan”?”

    I remember reading in his book that it might be a black swan if it was unexpected from your viewpoint even though it was expected from someone else’s. E.g., the killing of the turkey was a black swan for the turkey but not for the butcher. So if information about the probability of such an occurrence is limited, then it can qualify as a black swan. This, along with the dramatic impact of the event.

  • James Andrix

    Prediction markets will fail to answer questions that nobody thinks to ask (does any forecasting method not have this problem?)

    Simulation.

  • http://shagbark.livejournal.com Phil Goetz

    The “black swan” question highlights a problem with interpreting prediction markets. A prediction market should predict an expected value. But businesses don’t actually want expected values; they want something closer to the modal value. The expected value incorporates a lot of black-swan situations in which it is more profitable to go bankrupt; and so the business doesn’t want to average in those situations.

    Or perhaps the bidders factor that in when making their bids, so that the prediction doesn’t actually give expected value. (I may think there is a 10% chance that the US will have a revolution and the dollar become worthless; but I don’t factor that into my business operations, because my business will probably go under if that happens, and because I just don’t want to think about that possibility.) In that model, we expect prediction markets not to predict major financial collapses, because planners are only planning for scenarios in which that doesn’t happen.
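    A toy version of the mean-versus-mode gap described above: with a small chance of a huge loss, the expected value sits far from the modal value, so a market price tracking the mean looks "wrong" to a planner acting on the typical case. The payoffs and probabilities are invented for illustration.

```python
outcomes = {      # payoff (in some unit): probability
    1.0:   0.85,  # typical year: modest profit
    0.5:   0.10,  # weak year
    -20.0: 0.05,  # rare collapse the planner prefers not to think about
}

mean = sum(payoff * prob for payoff, prob in outcomes.items())
mode = max(outcomes, key=outcomes.get)

print("expected value: %.2f" % mean)  # dragged below zero by the rare collapse
print("modal value:    %.2f" % mode)  # what day-to-day planning assumes
```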

  • http://profile.typekey.com/bayesian/ Peter McCluskey

    jsalvati, it’s possible that having some kind of subprime MBSs be widespread was a good thing, but it was mostly bad to have subprime loans with down-payments lower than about 20%.

    Joe, what is happening is a mixture of your scenario 2 and scenario 3.

  • http://profile.typekey.com/dbabbitt/ dbabbitt

    There are economic theorists for whom the real estate crisis was a predictable event, fully in line with their theory of what a false interest rate would signal to the market. Are any of you including the Federal Reserve’s behavior or the member banks’ behavior in your calculations?

  • jor

    eric, I’m not in finance or academic statistics — but I don’t understand why you would leverage yourself 30-40x if you don’t have confidence in your tail estimates. I agree you have to model and make decisions with what you have, but betting 30-40x what you have on a mediocre model seems to me like the height of stupidity.

  • http://shagbark.livejournal.com Phil Goetz

    Combinatorial prediction markets allow one to ask billions of questions at once, which will cover a lot more black swans.

    How does that work?

  • eric falkenstein

    jor: I agree BS was overlevered. So, sure, mistakes were made, as is obvious from their failure. But the idea of Black Swans doesn’t get at their essence. Hubris? I don’t think it helps to merely say people should not have hubris–no one intends to have it. Do not have faith in bad models? Again, no one attempts to do this. Expect the unexpected? What does that mean? Taken literally, one should never invest in anything with a gestation period.

    I had no idea investment banks like BS and UBS warehoused so many asset-backed securities, which doesn’t make much sense, but that error comes from bad transfer pricing, because it should never have made economic sense for a bank to borrow at the AA rate to buy AA or AAA mortgage-backed securities. A good transfer pricing system would have stopped that. If you read UBS’s report to shareholders, they go over the errors pretty well (they were similar to Bear’s); you can google it, released around April of this year. That’s a boring reason, but it gets at their mistake pretty well. More fundamentally, the failure of mortgages was a function of relaxing underwriting standards over the past 15 years, and I think it is best laid out in Stan Liebowitz’s piece, which is a fascinating tale, but again, the Black Swan is pretty irrelevant to that line of analysis.

    Now, to say they repeatedly make the mistake of going bankrupt anthropomorphizes the market in a silly way. Some people make mistakes every 5 to 10 years in a systematic way, but they are different people, in different fields, making mistakes about different financial structures. But the annual default rate for nonfinancial companies, historically, is around 1%, so they usually do not make mistakes, though defaults cluster over time (e.g., 2001, 1990, 1981, 1970). Every panic, or crisis, is different, because people aren’t so dumb as to make the exact same mistake twice. I don’t see the Black Swan as a fruitful way to group these mistakes, because ‘overconfidence’ is merely evidenced by failure in these contexts, like saying don’t buy assets that will decline a lot in value.