Back in ’84, when I first started working at Lockheed Missiles & Space Company, I recall a manager complaining that their US government customer would not accept using decision theory to estimate the optimal thickness of missile walls; the customer insisted instead on a crude heuristic expressed in terms of standard deviations of noise. Complex decision-theory methods were okay for more detailed choices, but not for the biggest ones.
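To see the contrast, here is a minimal sketch of the two approaches. All numbers are hypothetical, invented purely for illustration: the decision-theory choice picks the thickness minimizing expected total cost (weight cost plus failure probability times failure cost), while the heuristic just adds a fixed number of standard deviations to the mean requirement, ignoring costs entirely.

```python
import math

# Hypothetical numbers for illustration only -- not from the post.
REQUIRED = 10.0        # mean thickness needed to survive (mm)
SIGMA = 0.5            # std dev of noise in the requirement (mm)
WEIGHT_COST = 2.0      # marginal cost per mm of extra wall (weight, fuel)
FAILURE_COST = 1000.0  # cost incurred if the wall fails

def p_fail(t):
    """P(actual requirement exceeds thickness t), requirement ~ N(REQUIRED, SIGMA)."""
    z = (t - REQUIRED) / SIGMA
    return 0.5 * math.erfc(z / math.sqrt(2))

def expected_cost(t):
    """Expected total cost of choosing thickness t."""
    return WEIGHT_COST * t + FAILURE_COST * p_fail(t)

# Decision-theory choice: minimize expected cost over a grid of candidates.
candidates = [REQUIRED + 0.01 * i for i in range(500)]
best = min(candidates, key=expected_cost)

# Heuristic choice: "mean plus three standard deviations," costs ignored.
heuristic = REQUIRED + 3 * SIGMA

print(best, heuristic)  # the two rules generally disagree
```

Because the heuristic never looks at the cost of weight or of failure, it picks the same margin whether failure costs a thousand dollars or a billion; the expected-cost rule adjusts automatically, which is exactly the kind of explicit tradeoff the customer refused.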
In his excellent 2010 book How to Measure Anything, Douglas W. Hubbard reports that this pattern is common:
Many organizations employ fairly sophisticated risk analysis methods on particular problems; … But those very same organizations do not routinely apply those same sophisticated risk analysis methods to much bigger decisions with more uncertainty and more potential for loss. …
If an organization uses quantitative risk analysis at all, it is usually for routine operational decisions. The largest, most risky decisions get the least amount of proper risk analysis. … Almost all of the most sophisticated risk analysis is applied to less risky operational decisions while the riskiest decisions—mergers, IT portfolios, big research and development initiatives, and the like—receive virtually none.
In fact, while standard decision theory has long been extremely well understood and accepted by academics, most orgs find a wide array of excuses to avoid using it to make key decisions:
For many decision makers, it is simply a habit to default to labeling something as intangible [=unmeasurable] … committees were categorically rejecting any investment where the benefits were “soft.” … In some cases decision makers effectively treat this alleged intangible as a “must have” … I have known managers who simply presume the superiority of their intuition over any quantitative model …
What they seem to take away from these experiences is that to use the methods from statistics one needs a lot of data, that the precise equations don’t deal with messy real-world decisions where we don’t have all of the data, or that one needs a PhD in statistics to use any statistics at all. … I have at times heard that “more advanced” measurements like controlled experiments should be avoided because upper management won’t understand them. … they opt not to engage in a smaller study—even though the costs might be very reasonable—because such a study would have more error than a larger one. …
Measurements can even be perceived as “dehumanizing” an issue. There is often a sense of righteous indignation when someone attempts to measure touchy topics, such as the value of an endangered species or even a human life. … has spent much time refuting objections he encounters—like the alleged “ethical” concerns of “treating a patient like a number” or that statistics aren’t “holistic” enough or the belief that their years of experience are preferable to simple statistical abstractions. … I’ve heard the same objections—sometimes word-for-word—from some managers and policy makers. …
There is a tendency among professionals in every field to perceive their field as unique in terms of the burden of uncertainty. The conversation generally goes something like this: “Unlike other industries, in our industry every problem is unique and unpredictable,” or “Problems in my field have too many factors to allow for quantification,” and so on. …
Resistance to valuing a human life may be part of a fear of numbers in general. Perhaps for these people, a show of righteous indignation is part of a defense mechanism. Perhaps they feel their “innumeracy” doesn’t matter as much if quantification itself is unimportant, or even offensive, especially on issues like these.
Apparently most for-profit firms could earn substantially higher profits if only they’d use simple decision theory to analyze key decisions. Execs’ usual excuse is that key parameters are unmeasurable, but Hubbard argues convincingly that this is just not true. He suggests that execs are covering for poor math abilities, but that seems implausible to me as an explanation.
I say that their motives are more political: execs and their allies gain more by using other, more flexible decision-making frameworks for key decisions, frameworks with more wiggle room to help them justify whatever decision happens to favor them politically. Decision theory, in contrast, threatens to more strongly recommend a particular hard-to-predict decision in each case. As execs gain when the orgs under them are more efficient, they don’t mind decision theory being used down there. But they don’t want it at their level and above, for decisions that determine whether they and their allies win or lose.
I think I saw the same sort of effect when trying to get firms to consider prediction markets: they were okay for small decisions, but for big ones firms preferred estimates made by more flexible methods. This overall view is, I think, also strongly supported by the excellent book Moral Mazes by Robert Jackall, which goes into great detail on the many ways that execs play political games while pretending to promote overall org efficiency.
If I ever did a book on The Elephant At The Office: Hidden Motives At Work, this would be a chapter.
Below the fold are many quotes from How to Measure Anything: