15 Comments

Greedy, thanks, I've updated the link.


The link to the paper seems to be bad. Here's another that Google gave me: http://linkinghub.elsevier....


Michael: point taken. Merely because people invest in VC (and in hedge funds) does not make them superior investment vehicles.

Also, after thinking about it, I think it could be quite challenging to show that my theory about small organizations being more productive than large ones is accurate. Imagine that you picked a few hundred companies, some with under 100 employees and some with over 1,000, that survived for five years and studied them. You might find that the under-100-employee firms were generally better investments, but perhaps that is a question of "nowhere to go but up" more than my theorized improved efficiency in small organizations. Trying to segregate the effects might be very difficult indeed...
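
To make "segregating the effects" concrete, here is a rough sketch (in Python, with entirely invented data and coefficients) of the kind of comparison one might run: the naive small-vs-large gap versus the same gap after controlling for how badly the firms were doing at the start.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Synthetic firms: a 'small' flag, a noisy baseline-performance measure, and
# five-year returns that depend on both.  All numbers here are invented.
small = rng.integers(0, 2, n)                  # 1 = under 100 employees
baseline = rng.normal(0, 1, n) - 0.5 * small   # small firms start lower on average
returns = 0.3 * small - 0.4 * baseline + rng.normal(0, 1, n)

# Naive comparison: small firms look better, but part of that gap is just
# "nowhere to go but up" (a low baseline predicting higher subsequent returns).
print("Naive small-vs-large gap:",
      returns[small == 1].mean() - returns[small == 0].mean())

# Controlling for baseline performance isolates the size effect itself.
X = np.column_stack([np.ones(n), small, baseline])
coef, *_ = np.linalg.lstsq(X, returns, rcond=None)
print("Size effect after controlling for baseline:", coef[1])
```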


> Michael, since you accept a rough correlation between confidence and competence, the question is whether this is an equilibrium, or whether individuals on one side or the other could benefit by deviating from existing behavior patterns. If you think you could evaluate people better by putting more weight on probability competence, you think companies are missing profit opportunities, and so you could make money by doing better than they.

Well, to some extent I'm already doing this. I'm in business and I operate the way I said, and I encourage others in my organization to do so. I'm not filthy rich yet, so it clearly doesn't overwhelm negative effects like spending too much time commenting on blogs during the workday :) But I'm confident that I've seen some return from doing this on a careful basis. I don't eliminate people for overconfidence; I try to probe them to see if I can get them to understand the underlying probability with some discussion. If they can't follow me at all and we theoretically share expertise, I have to assume that they are less bright than I might otherwise believe.

I would guess that *some* level of overconfidence does represent an equilibrium strategy for both sides, but my bet is that the average person errs by trusting this signal too much, and I absolutely put that into practice.

BTW, Perry, about VC returns: even though I think you're probably right about VC returns, I'm not quite so willing to buy the "people put lots of money in them so they must get fabulous returns" logic. Hedge funds have gotten huge amounts of money in the last decade too, and there isn't much evidence that they outperform more typical investments on a risk-adjusted basis after you account for their hefty fees. I also believe (based on fairly anecdotal evidence) that there's a large and persistent difference between the returns of the best VCs and those of typical VCs.
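
To show what "risk-adjusted, after fees" means here, a toy sketch with entirely made-up numbers (not actual fund data): a plain index fund against a hypothetical hedge fund charging "2 and 20", compared on a simple Sharpe ratio.

```python
# Toy comparison of risk-adjusted returns net of fees.  All figures are
# illustrative assumptions, not real fund data.

risk_free = 0.04                                  # assumed risk-free rate

# Hypothetical index fund: 8% gross return, 15% volatility, 0.1% expense ratio.
index_gross, index_vol, index_fee = 0.08, 0.15, 0.001

# Hypothetical hedge fund: 10% gross return, 12% volatility, "2 and 20" fees.
hf_gross, hf_vol = 0.10, 0.12
hf_mgmt, hf_perf = 0.02, 0.20

def sharpe(net_return, volatility):
    """Excess return per unit of volatility (a crude risk adjustment)."""
    return (net_return - risk_free) / volatility

index_net = index_gross - index_fee
# In this toy model the performance fee applies to gains above the management fee.
hf_net = hf_gross - hf_mgmt - hf_perf * max(hf_gross - hf_mgmt, 0.0)

print(f"Index fund: net {index_net:.1%}, Sharpe {sharpe(index_net, index_vol):.2f}")
print(f"Hedge fund: net {hf_net:.1%}, Sharpe {sharpe(hf_net, hf_vol):.2f}")
# Even with a higher gross return and lower volatility, the hedge fund can end
# up with the worse risk-adjusted net return once its fees are subtracted.
```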


Hal: One would naively expect that a bias towards optimism would increase risk taking, and perhaps that is true, but my experience is that what happens instead is that people become foolishly optimistic about the ability to remain on the course currently planned by management. Indeed, pessimism, on the few occasions when it is acceptable, seems to be oriented against change. Most of my experience has been on the technology infrastructure side of companies, so my viewpoint is necessarily limited.

Douglas: I don't have any statistics at hand. I've been told that repeatedly by people that I trust, but I don't myself have the numbers. The people in question are quite well informed (such as Victor Niederhoffer) but they might be mistaken or I might be misinterpreting their information. However, given the amount of money that has flooded the VC world over the decades, I can only assume that they have some reasonably good story to tell about potential return.


Perry E. Metzger: "the return on investments in venture capital has traditionally far outstripped the return on investments in larger companies"

Cite? My impression is that the ROI on VC is secret.


There are many things going on within a business firm. Encouraging overconfidence in managers has the effect of enhancing risk-taking. Projects which an unbiased manager would reject may be accepted if overly optimistic estimates guide the decision making process.

Enhancement of risk acceptance can be valuable for a business if there are other factors which cause risk-aversion. One such factor is a perception by managers that the penalties for a failed project are much greater than the rewards for a successful one. This also manifests in the case of a manager with many subordinates; he would prefer a greater degree of risk-taking among them, since successes by one will balance a failure by another, but the subordinates themselves don't benefit from such balance.
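
A toy simulation (with invented payoffs) of the pooling asymmetry just described: the firm's portfolio of risky projects has a comfortably positive expected value and rarely loses money in aggregate, while each individual subordinate, facing a large penalty for a single failure, has a negative expected career payoff from the very same projects.

```python
import random

random.seed(0)

# All numbers below are invented for illustration.
P_SUCCESS = 0.6          # chance a risky project succeeds
N_PROJECTS = 20          # projects under one manager
N_TRIALS = 100_000

# Firm's payoff per project: +100 on success, -60 on failure (positive EV).
firm_ev = P_SUCCESS * 100 + (1 - P_SUCCESS) * -60
print(f"Firm's expected value per project: {firm_ev:+.1f}")

# Subordinate's career payoff: small credit for a success, large penalty for a
# failure -- the asymmetry that makes individual risk-taking unattractive.
sub_ev = P_SUCCESS * 1 + (1 - P_SUCCESS) * -3
print(f"Subordinate's expected career payoff per project: {sub_ev:+.2f}")

# For the manager, failures on some projects are offset by successes on others.
losing_portfolios = 0
for _ in range(N_TRIALS):
    total = sum(100 if random.random() < P_SUCCESS else -60
                for _ in range(N_PROJECTS))
    if total < 0:
        losing_portfolios += 1
print(f"Chance the whole portfolio loses money: {losing_portfolios / N_TRIALS:.1%}")
```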

If businesses have biases that make them risk-averse, countering biases that promote risk will be beneficial. This relates somewhat to the posting on seen vs unseen biases, although in this case I'm not sure that any are seen much more clearly than others.

The examples above come from Kahneman and Lovallo, chapter 22 of http://print.google.com/pri...


My opinion is that it is not an equilibrium, and that the reason startups are profitable, and large companies so rarely succeed at radically new ventures in spite of their resource advantages, is that you can run a more profitable business by deviating. I'll expand on this a bit.

My hypothesis is this: large organizations naturally tend towards mismanagement and inefficiency, but are held in check by competition. The tendency towards mismanagement comes from a sort of "Gresham's Law" of managers -- bad managers, or at least mediocre ones, drive out good ones over the long term. In small organizations, the people that succeed tend to be the ones that add the most value to the organization, but over long periods, it becomes much harder to see the impact of individual employees and managers on the overall productivity of the organization, so the value individuals add becomes harder to judge. Once this point is crossed, the people that succeed best are the ones that are most skilled at political infighting and self promotion and not at adding value.

They are sort of like a cancer -- in the body, cells that mutate and disobey their programming and look after their own needs ahead of the needs of the whole organism do very well for their genetic line for a brief time, and similarly, managers that place their own needs ahead of those of the company but evade the "corporate immune system" do quite well for themselves for a while. Ultimately, however, cancers kill organisms if they are not kept in check. Similarly, all commercial organizations still need to be at least as efficient as competitors to survive, so this tendency towards "defecting" managers is kept at least modestly at bay. My supposition is that there is a dynamic equilibrium between these two tendencies, though of course almost all companies ultimately fail, whether after six months or one hundred fifty years.

Naively, one would assume that startups would rarely succeed because existing organizations generally have large pools of money, expertise, etc., and could easily enter into most new areas of business on their own and take advantage of the profit opportunities they present. I think one advantage startup companies have is that, when they are still quite small, their productivity can be much higher. This is for several closely related reasons that are the converse of the "corporate cancer" issue I just mentioned. In a small organization it is harder to hide whether or not your role is important to the success of the entire venture. In a small organization, the founders have enormous incentives to cut deadweight quickly, and have good visibility into the activities, and thus the productivity, of all members of the organization.

Of course, some new ventures are poorly run, but in a tight-capital, small-size situation such ventures die quite quickly, so the ones that survive tend to be quite aggressive and streamlined, and are often far more profitable for their owners than investments in larger organizations -- the return on investments in venture capital has traditionally far outstripped the return on investments in larger companies, and I think this may be part of the reason.

My guess is that the critical point happens when the organizational size crosses the Dunbar Number -- you go quite quickly from a group where everyone knows what everyone does and thus can directly judge competence and productivity to a size where everyone has to rely on proxies for competence and productivity, and, in the long run, skill in looking good wins over skill in being good. Then, of course, at some point enough people are just "looking good" and not enough are being good, and the organization fails.

I emphasize that this model is based largely on guesses and intuitive conclusions based on watching companies, not on good statistical data. (Well, all except the return from venture investing.) It would be interesting to devise experiments to test my hypotheses but until then they should be regarded with significant skepticism.

As a last point, you ask "could you run a more profitable business by deviating and rewarding non-optimistic non-yes-men people" -- my answer is (as I said) on some level yes, but I'll also note that it is in an important respect "no". It is "no" in the sense that I don't think most organizations can be reformed very well any more than aircraft can be repaired in flight very well. It is much easier to start over with new people than it is to fix the dynamics of an existing group. Your question implicitly assumes that an organization that has hit this sort of "management cancer" could somehow choose to reform itself but I'm not sure that is possible. The creative destruction of the markets means this isn't a problem for society -- truly hidebound companies will eventually die and others will replace them -- but it is a problem for the owners of the hidebound companies. Perhaps, though, there is nothing to be done about it. (I certainly can't offer any magic bullet that would work.)


So Perry, I'll ask you the same question I asked Michael. Is this an equilibrium, or could you run a more profitable business by deviating and rewarding non-optimistic non-yes-men people?


Robin asks: "Perry, you confirm that overconfidence exists and that accuracy is punished; can you confirm the theory proposed, that confidence is taken as a signal of competence? If not, what is a more plausible theory?"

I would say things slightly differently. I would say that optimism about management goals (i.e. yes-man behavior) is rewarded, even when such optimism is unwarranted, and pessimism and "negativity" (even when realistic) is punished. Part of this may be a question of managers rewarding people who tell them what they want to hear, and part of this may be a bias (which appears to be rather widespread) towards optimistic people in the workplace.

Overconfidence seems like a subset of optimism, while a realistic take on possible failure modes is, I suspect, the sort of viewpoint that gets labeled "negative".

However, my personal "study" of this is entirely anecdotal and based on informal observation. I do not know if it is correct, and I caution that I was making commentary rather than pretending to have a scientifically demonstrated viewpoint.

That said, an interesting experiment would be to put together some test of people's attitudes towards "optimistic" and "pessimistic" viewpoints in a working context...


Michael, since you accept a rough correlation between confidence and competence, the question is whether this is an equilibrium, or whether individuals on one side or the other could benefit by deviating from existing behavior patterns. If you think you could evaluate people better by putting more weight on probability competence, you think companies are missing profit opportunities, and so you could make money by doing better than they.


> Michael, as you note, the fact that the people surveyed correctly described which developers understood probability better suggests that this is not a problem with understanding probability. Why not accept the proposed theory, that confidence signals competence?

It's not so much that I don't accept that proposed theory (although, as I suggest later, skepticism is still called for on any single result) as that I'm speculating that confidence signalling competence is the result of a common and general lack of understanding of uncertainty. This result suggests that the signal has taken on a life of its own and operates even for those who do (at least intellectually) understand enough about uncertainty to make certain judgements correctly.

But my own understanding of uncertainty leads me to consciously apply a "caring about getting uncertainties right signals competence" filter to my own decisions, and I do my best to impart the same to anyone I advise. One reason I buy the theory is that I can see my own reptile brain wanting to apply the confidence = competence signal whenever I am not expert enough to judge the uncertainties. When I *can* judge the uncertainty, failing to apply that information to my competence decisions seems like a failure to carry my understanding of uncertainty to its logical conclusion, which would represent a lack of either real belief in, or real understanding of, that uncertainty.

Of course, my own judgements have to be careful as well, in that people who *do* understand uncertainty well may well downplay that knowledge when giving estimates, because they know that for most people it will be irrelevant or damaging to their competence estimate. Someone's apparent overconfidence may be the result of a conscious spin to boost sales, rather than a lack of understanding of the problem's uncertainties.


Perry, you confirm that overconfidence exists and that accuracy is punished; can you confirm the theory proposed, that confidence is taken as a signal of competence? If not, what is a more plausible theory?

Michael, as you note, the fact that the people surveyed correctly described which developers understood probability better suggests that this is not a problem with understanding probability. Why not accept the proposed theory, that confidence signals competence?


I think this is an example of a general bias where most people simply don't get uncertainty very well at all.

I've long had an internal practice, when answering questions (either for myself or others), of including in my own thoughts an uncertainty figure, and I try to give some part of that information to people questioning me when I am not actually certain of my answer. People who don't know me well often see that as an indication that I don't know as much as some other person who gives the same answer but always indicates 100% confidence. Some people literally *demand* to know answers with 100% confidence in situations where no one could possibly give such an answer and never be wrong. I believe what's going on is not that they actually expect an answer with 100% confidence so much as that they are compensating for most people's overconfidence. If I say I am 90% confident of an answer, they assume I am like everybody else, and they give me credit for being "better than a wild guess" but not much beyond that.
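
The nice thing about stated confidence figures is that, given a track record, they can actually be scored. Here's a minimal sketch (with invented answer histories) of one standard way to do it, a Brier score, under which the careful "90% confident and usually right" answerer beats the always-certain one:

```python
# Minimal sketch of scoring stated confidence against actual outcomes.
# The answer histories below are invented purely for illustration.

def brier_score(forecasts):
    """Mean squared error between stated confidence and the actual outcome.
    Lower is better; 0.0 means always fully confident and always right."""
    return sum((conf - outcome) ** 2 for conf, outcome in forecasts) / len(forecasts)

# Each entry is (stated confidence, 1 if the answer turned out correct else 0).
careful_answerer = [(0.9, 1)] * 9 + [(0.9, 0)]        # says 90%, right 9 times in 10
always_certain   = [(1.0, 1)] * 8 + [(1.0, 0)] * 2    # says 100%, right 8 times in 10

print(f"Careful 90% answerer:    {brier_score(careful_answerer):.3f}")
print(f"Always-certain answerer: {brier_score(always_certain):.3f}")
```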

What's most surprising about this result is that the bosses in question appear to have judged that D2 has a better understanding of uncertainty, yet that insight doesn't lead them to question their instincts about how they judge D2's certainty estimates vs. D1's. Amazing.

We get what looks to me like the same underlying bias in the sciences, where having a hypothesis confirmed at the 95% significance level gets you published, and if the result fits enough people's prejudices just so, then everybody starts quoting your result as if it is now a KnownFact[tm] supported by Science[tm]. If you run an experiment that only supports your hypothesis at the 90% level, it's a failure that proves nothing, you don't publish anywhere, and nobody knows anything about your result (unless you include it along with a related "successful" experiment). This is despite the fact that if 3 or 4 different teams repeat an experiment completely separately and all get a 90% result, that probably represents a much stronger case than a single 95% result.
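
A rough way to see this is to combine the independent results with Fisher's method. This sketch (assuming SciPy is available, and with invented p-values) shows that three separate experiments, each significant only at the 90% level, combine to something stronger than a single 95% result:

```python
# Rough illustration: several independent "90%" results vs. one "95%" result,
# combined with Fisher's method.
from scipy.stats import combine_pvalues

single_result = 0.05               # one experiment, significant at the 95% level
replications = [0.10, 0.10, 0.10]  # three independent experiments, each only at 90%

statistic, combined_p = combine_pvalues(replications, method="fisher")
print(f"Single experiment:            p = {single_result}")
print(f"Three replications combined:  p = {combined_p:.3f}")
# The combined p-value (about 0.03) is below 0.05, i.e. the individually
# "weaker" but replicated evidence makes the stronger case.
```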

I'm extremely skeptical of any scientific result based on a single experiment, but a staggering proportion of what we "know" in certain fields is based on experiments that have never been publicly repeated.

This gets compounded by the tendency of journalists and laypeople (and some scientists) to interpret the meaning of results much more widely than the experimenters would ever consider. I was just reading about a fairly typical example of this here:

http://itre.cis.upenn.edu/%...

(BTW site gods: can you use html here? I've tried a few things and they just got stripped. How does one include a proper link in a comment? or italic/boldface?)

This seems to be a bias that the scientifically inclined have to watch out for with great care whenever we step out of fields where we have a good understanding and can get access to and pretty well understand the original research writeups. It's very hard to get important information from most popular science writing. I rarely see science journalism be very accurate in fields with which I have a passing familiarity or a modicum of real expertise. That leads me to question pretty much everything I see where all the methodological cards are not on the table, or where I am not competent to follow the reasoning when they are.

I think this kind of radical skepticism should be highly encouraged. Of course, if I swallow my own pill here, I have to be radically skeptical of the very result you just quoted here.


I've watched a considerable number of large organizations in my career as a consultant.

In general, honesty with your supervisors in a management context is punished. Being realistic and proving correct is not rewarded -- being foolishly optimistic is rewarded, and the foolishness is most often forgiven later if the optimist is good at politics. Broadly speaking, I've seen that honesty with management generally does not pay.

This is not universal, by the way. This tends to be the case in large organizations where over long periods of time promotion has become disconnected from quality. Most organizations end up this way, though the good ones do not start this way. One of the reasons I think it is very valuable to live in an entrepreneurial society is that progress seems to usually be made by new, agile organizations where competence is valued rather than sclerotic older organizations where political skill is rewarded.

By the way, one of the hardest problems, as a manager, is learning what is actually going on in your organization, because usually, your subordinates have been conditioned through time to reflexively lie to management. They also usually wonder why it is that management is unaware of what "everyone" knows is going on -- the answer being that no one has told management what is going on for fear of being the messenger that gets shot. The trick some managers seem to employ is cultivating spies inside their own organization to tell them what any ordinary person in their group would be able to learn just by virtue of being around.
