I was watching Robin’s speech at the O’Reilly Open Source Convention, and he highlights the usefulness of accumulating track records. I want to highlight a curious bias toward those who successfully call improbable events. I am talking here about events that have no cross section, so we cannot calibrate them in our lifetimes (e.g., something that has a 1% chance of happening every year, as opposed to something like death rates, which have a small probability but a wide cross section). As Alan Greenspan
What Eric seems to be saying has an analogue in weather forecasting. In the language of this discipline, verifying forecasts of low-probability events is difficult because your sample size is too small to get an accurate picture of skill. And not only do you need to be able to pick the events accurately; your forecast model should predict events with roughly the probability at which they actually occur: forecast a hurricane with 100% probability every day and you are a 'stopped watch'.
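The calibration point is easy to make concrete with the Brier score, a standard verification measure in weather forecasting (mean squared error between forecast probabilities and 0/1 outcomes). This is a toy sketch; the numbers (a 1% daily base rate over 10,000 simulated days) are illustrative assumptions, not anything from the thread:

```python
import random

random.seed(0)

# Simulate 10,000 days on which a rare event (assumed 1% base rate) may occur.
days = [random.random() < 0.01 for _ in range(10_000)]

def brier(forecast_prob, outcomes):
    """Mean squared error between a fixed forecast probability and 0/1 outcomes."""
    return sum((forecast_prob - o) ** 2 for o in outcomes) / len(outcomes)

# The 'stopped watch': forecasts the event with 100% probability every day.
# It "calls" every event that happens, yet its Brier score is dreadful.
alarmist = brier(1.0, days)

# A calibrated but unskilled forecaster who just states the base rate.
calibrated = brier(0.01, days)

print(f"alarmist Brier score:   {alarmist:.4f}")    # close to 0.99
print(f"calibrated Brier score: {calibrated:.4f}")  # close to 0.01
```

Lower Brier scores are better, so the always-certain alarmist scores far worse than someone who merely recites the climatology, which is the 'stopped watch' objection in numerical form.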
Not that this can really apply in economic forecasting, where not only do we not predict the future by applying well-known dynamical rules to the current state, but it's far from clear that we know what those dynamical rules are. Instead we have a range of experts, each emphasising their own special-interest bit of sub-dynamics.
Eric mentions Marx, and no doubt there are those who are again suggesting that the current crisis is a vindication of Marx. These are the people who read the pamphlet 'The Communist Manifesto' and put down Capital before finishing the first chapter (Mick Hume at Spiked has a nice article about this). Or who skipped straight to Trotsky. Marx is interesting for his attempt to describe the dynamical principles behind the capitalist economies of his day. Whether he's useful today is another question.
If you have a large Crank Pool of indistinguishable-to-you apparent-cranks (each of whom has, individually, a low prior probability of not actually being a crank), and at most one of them is actually Not A Crank, and each of them makes a vague enough prediction that multiple cranks will predict correctly by sheer chance, then unfortunately you have no way of completely promoting an apparent-crank out of the Crank Pool based on a single prediction. The best you can do is reshuffle your probability mass within the Crank Pool about who might be the sole non-crank in the pool if one exists (moving probability mass away from the failed apparent-cranks and towards the successful apparent-cranks), but you can't significantly increase your probability that a non-crank exists in the Crank Pool at all with a single observation in this scenario.
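The "reshuffle within the pool, but don't promote out of it" point falls straight out of a toy Bayes update. The numbers below (a pool of 100, a 10% prior that any non-crank exists at all, a vague prediction that luck gets right half the time versus 90% for the true non-crank) are assumptions chosen purely for illustration:

```python
def update_after_one_prediction(p_exists, pool_size, q_chance, r_skill):
    """Bayesian update after one pool member makes a correct prediction.

    p_exists : prior probability that the pool contains a single non-crank
    pool_size: number of indistinguishable apparent-cranks
    q_chance : probability a genuine crank is right by luck (vague prediction)
    r_skill  : probability the non-crank calls it correctly
    Returns (posterior P(member X is the non-crank),
             posterior P(a non-crank exists in the pool)).
    """
    # Prior that any one specific member X is the non-crank
    p_member = p_exists / pool_size
    # Total probability of observing member X's correct call:
    evidence = ((1 - p_exists) * q_chance            # no non-crank anywhere
                + p_member * r_skill                 # X is the non-crank
                + (p_exists - p_member) * q_chance)  # someone else is
    post_x = p_member * r_skill / evidence
    post_exists = (p_member * r_skill + (p_exists - p_member) * q_chance) / evidence
    return post_x, post_exists

post_x, post_exists = update_after_one_prediction(0.10, 100, 0.5, 0.9)
print(f"P(X is the non-crank):   {post_x:.4f}")       # ~0.0018, up from a 0.0010 prior
print(f"P(pool has a non-crank): {post_exists:.4f}")  # ~0.1007, barely above the 0.10 prior
```

One correct call nearly doubles the odds on the successful member relative to the others (the reshuffling), while the probability that the pool contains a non-crank at all moves from 10% to about 10.07%, because a lucky crank explains the observation almost as well.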
What do you do?
You investigate whether something is occurring that shouldn't be, and not just blithely dismiss it.
The problem is that when a small-probability event happens to a complex system, it is not as simple as reverse-engineering a bridge collapse or a space shuttle disaster.
I'm not sure those are simple either. To see this, ask yourself "Why was the gusset plate only 1/2 inch thick?" or "Why was the shuttle allowed to fly at temperatures too cold for the O-ring?" Behind each of these engineering decisions is a complex human process, much like the complex human processes involved in the current financial crisis. Oversimplifying by saying "it is just the 1/2 inch gusset plate" is much like saying "Oh, it is just the bad loans being given out by shady people." Well, yes, but many other things also contributed. For the financial crisis, this would include things like the overuse of risk models, the increasing leverage kept off the public books, crazy incentives to give bad loans, and so on. For the space shuttle (Challenger) example, this includes engineers not communicating well with management, management pressure not to be the person who stops the shuttle launch, a strange culture of accountability at NASA, political pressure to perform, insufficient data about operating under cold conditions, etc.
However, I agree that these complex systems have so many variables that it is very difficult to know which ones are binding, and when, and that observing failures is not a good way of evaluating theories about them.
Over-the-counter prediction default swaps (PDOs). What could go wrong?
And you know a good way to make them even better, Robin? Allow prediction market bettors to make unsecured, leveraged bets! =-D
Perhaps one of the overlooked benefits of prediction markets is that they allow those who are knowledgeable yet unsuccessful in persuading others to gain satisfaction via a little personal reward, as opposed to merely being able to say 'told ya so'. As most individuals have a negligible effect on group decisions, and even with hindsight should not be given too much authority, this at least gives them some benefit.
This is of course one of the reasons to rely on prediction markets, both for forecasts and to reward those who foresee better than others.
Sounds to me like the obvious implication of Eric's post is that people tend to overvalue prescience. Strange... the theme I see in it is that people tend to mistake hindsight for prescience, and so try to apply to the future crafted solutions that would have prevented what they believe to have gone wrong in the past.
The post seems mistitled. It's not prescience that's dangerous, but the appreciation of foresight without respect for the limits of one's ability to apply it.
King Rat: Sounds to me like the obvious implication of Eric's post is that people tend to overvalue prescience. It's not that you should ignore people who have been right in the past because you assume it's just luck, it's just that you shouldn't let the fact that they were right about some major past event bias your confidence in their predictions about future events (or bias your confidence in their ex post story about why they were right).
Looked to me like the implication was that stopped watches are right twice a day, so you should have to check the watch at least 3 times a day to see if it's not stopped (to extend the metaphor in a direction that I'm not necessarily 100% comfortable with).
Real estate bubbles may be rare in any one country, but there have been many real estate bubbles across the world in the last half century, the most famous being Japan's in the early 1990s. There is no way the Fed could say it was ignorant of the possibility of a real estate bubble here in the United States.
@Eric
You may be interested in seeing the public CDS data release today from the DTCC. This will address many questions.
The original "most correct" caller of the crisis has to be Buffett in March 2003. And also Paul Wilmott, the quant, who made a pest of himself telling risk managers to stop looking at the pig in the snake and start taking a microscope to the tail for, oh, about two years.
Finally, David Einhorn warned of near-term catastrophe in June 2008 from this very source, but it had also been pointed out in the Financial Times in a blog post in April on Goldman, as well as a NY Times business piece the first week of August.
Nouriel maybe just got on TV first. That's another kind of bias - pundit bias, or whatever you would call it - where something isn't real until it's been on TV, and the first person to say it on TV gets credit for the discovery.
Since it's all about track records, let me venture a prediction: the next possible source of contagion is exchange-traded funds. Triple inverse! Yummy bombs, says Kedrosky.
Oh please. The run-up in housing prices was completely visible and the housing price bubble popping was completely foreseeable. What form it took perhaps not.
The shorter version of what you are writing is "A stopped watch is right twice a day." (Doomsayers are always predicting doom and are bound to be right 1% of the time.)
That still doesn't provide any evidence that the people who predicted the bubble popping are stopped watches and shouldn't be listened to (which is the obvious implication).
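The stopped-watch arithmetic above is easy to make concrete. Under purely illustrative assumptions (a 1% annual chance of a once-in-a-generation bust, 30-year punditry careers, a population of 500 perpetual doomsayers), luck alone produces a sizeable crowd of apparently vindicated prophets:

```python
# Back-of-envelope sketch; all three numbers are assumed for illustration,
# none come from the thread itself.
p_crash = 0.01   # assumed annual probability of a major bust
years   = 30     # assumed length of a punditry career
pundits = 500    # assumed number of always-bearish pundits

# Chance that any single perpetual doomsayer sees at least one crash
# during a career, and so can claim "told ya so":
p_vindicated = 1 - (1 - p_crash) ** years
print(f"P(one doomsayer 'called it' at least once): {p_vindicated:.2f}")  # 0.26

# Expected number of apparently prescient pundits, by luck alone:
print(f"Expected lucky 'prophets': {pundits * p_vindicated:.0f}")  # 130
```

Which is the point of the watch metaphor: a nontrivial crowd of "successful" doomsayers is exactly what pure chance predicts, so a single vindication can't separate the skilled from the stopped.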