Actually, I'm not really sure your priors are right here. Turnover in small business is about 10%, and less than that represents the collapse of companies; bankruptcy occurs to about 6% of businesses. There are other complicating factors that suggest low numbers, because businesses aren't people and so "business survival" isn't such a concrete term.
Add in the fact that most businesses seem to be started by people who already know how the business operates and what the market is like, and a pretty confident expectation of income and success doesn't seem like a bad assumption.
I think that a lot of our discussions about confidence in business stem primarily from a misinformed view of what the business population is actually like. For example, a lot of overconfidence studies test CEOs; but that title carries prestige and indications of ambition, so it's perfectly possible that such studies are hugely skewed by their own selection effects. But this basic objection, that a lot of this seems to ignore any... economic anthropology, is my standard issue with a lot of what is said here and in economics generally.
I think the illusion of explanatory depth can serve as a kind of bet that when called to understand the details, we will be able to. The problem is, most folks don't have much experience actually making that transition. This is mostly another aspect of rational ignorance -- if the lazy/far theory works well enough, why waste time fleshing it out?
What I'm trying to say is, IOED is just part of the natural human strategy for survival in the world, and luckily enough, it enables some folks (perhaps by luck, perhaps by dint of native skill in analysis) to make large cognitive leaps which can then be justified by analysis and synthesis, or by science.
Of course, many (most?) leaps end up on the sharp rocks.
The vision analogy is another good one providing support for my ideas!
If far mode is like wide-field vision and near mode is like *focused* vision, that means that near mode is just a special case of far mode. If far mode is analogical/narrative reasoning and near mode is Bayesian reasoning, it follows that Bayesian reasoning is just a special case of analogical/narrative reasoning (a case of especially *focused* analogical inference).
Based on this (far-mode) 'vision' analogy, my conclusion seems obvious. But perhaps I am suffering huge over-confidence caused by my shallow understanding of the (near-mode) details of Bayesian inference? ;)
Most decisions we make have an exponential, not linear, distribution of outcomes. Say you're starting a business: I think your uncertainty is on the scale of orders of magnitude. It could be a bust, a steady income, or it could make you very rich. It's not like you expect it to make between $200-600k a year with 95% certainty.
Say I'm fitting a model for how to make these types of decisions: I'll fit my model on the logarithm of the dependent variable, not a linear model. Now my model gives me predictions of Y for various independent variables X, but because I fit on a log-log scale, the unbiased linear expectation is going to be higher than what my log-log model says.
Overconfidence bias is a good correction for another bias that comes from the uncertainty in the models we use. I think this is why optimism works best in areas where outcomes tend to be the most exponential (e.g. technology or entrepreneurship) and most disastrous where outcomes tend to be more linearly distributed (finance is the best example I can think of).
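The gap between a log-scale fit and the linear expectation can be sketched numerically. This is a minimal illustration of Jensen's inequality, assuming lognormal outcomes with invented parameters (mu, sigma are arbitrary choices, not from the comment above):

```python
import math
import random

random.seed(0)
mu, sigma, n = 1.0, 1.0, 100_000

# Outcomes whose logarithm is normally distributed, i.e. multiplicative
# uncertainty spanning orders of magnitude.
log_y = [random.gauss(mu, sigma) for _ in range(n)]
y = [math.exp(v) for v in log_y]

# A model fit on log(Y) estimates E[log Y]; exponentiating that estimate
# recovers (roughly) the *median* of Y, not its mean.
naive_prediction = math.exp(sum(log_y) / n)  # ~ exp(mu)
true_mean = sum(y) / n                       # ~ exp(mu + sigma**2 / 2)

# Jensen's inequality: E[Y] > exp(E[log Y]) whenever sigma > 0, so the
# naively back-transformed forecast is systematically pessimistic.
print(f"naive back-transformed prediction: {naive_prediction:.2f}")
print(f"actual mean outcome:               {true_mean:.2f}")
```

With sigma = 1 the true mean is larger than the naive back-transformed prediction by a factor of about exp(1/2), which is the "correction" a dose of optimism would supply.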
We propose regulation of social science journals so that readers are not subjected to superfluous acronym proliferation (SAP). Cox and Hardy (2004) showed that SAP-induced fatigue causes greater impairment of reading comprehension than legalistic writing does, where long phrases are truncated to a single key word or two.
These critics would, for example, render "the illusion of explanatory depth" as The Illusion rather than IOED.
Although it may seem counterintuitive that lawyerly writing proved easier to understand than writing with SAP, their results were significant at the 0.0001 level. Still, SAP defenders Higglefirth and Blumenstein (forthcoming) have prepared a methodological improvement on the basic model, tentatively titled not quite so superfluous acronym proliferation (NQSSAP). Details remain to be worked out.
Could we even be sure of that? One issue with the lack of resolution in modes of thought is that many people have never learned to increase their mental resolution to begin with.
You make an analogy with vision. I took an awesome amount of drawing and painting in college. One thing you learn is how to observe objects so that you can draw them with a high degree of fidelity. Likewise, conceptual resolution is a learned capacity. People do not naturally envision complex systems with many moving parts, nor do they have a natural way of incorporating the limits of their knowledge.
It might not be that people are "lazy" but rather that they are untrained.
Far mode is really lazy mode. We are in it by default. It gives us fast conclusions on few details, but that's what we ask of it. It's not the lack of details considered that creates overconfidence, it's the disinterest we have in discovering overconfidence that gives us overconfidence.
It's all a bit like vision. We are overconfident in the quality of our perception of the entire visual field in front of us. But we can actually divert the fovea to a target, and we know we need to do this, if we are motivated to do so.
The biggest question in practice is whether we realize the need to focus on a piece of the field before what's in front of us changes and removes that possibility.
Even more so, all of this makes me wonder why people like confidence in others. Particularly others that will do work for them or make recommendations to them.
All of this makes me wonder why people are set up to be overconfident. Alternatively, whether the study measures what it thinks it does.
People might be abstractly confident but concretely unconfident, without being overconfident at all. There isn't dissonance there; people who are pro-market often see abstract confidence paired with concrete humility as a necessary part of their justifications.
In such a case the study might measure the fact that people don't tend to give the two types of cognition different terms when discussing their "understanding".
Also, repeat mjgeddes' final thought here. http://www.overcomingbias.c...
Self plug, but here's a discussion I had with Slavisa Tasic on the Illusion of Explanatory Depth (mp3) for the Critical Review journal's alumni site.