Missing Measurements

Luke Muehlhauser quotes from Douglas Hubbard’s How to Measure Anything:

By 1999, I had completed … analysis on about 20 major [IT] investments. … Each of these business cases had 40 to 80 variables, such as initial development costs, adoption rate, productivity improvement, revenue growth, and so on. For each of these business cases, I ran a macro in Excel that computed the information value for each variable. … [and] I began to see this pattern:

  • The vast majority of variables had an information value of zero. …
  • The variables that had high information values were routinely those that the client had never measured…
  • The variables that clients [spent] the most time measuring were usually those with a very low (even zero) information value. …

Since then, I’ve applied this same test to another 40 projects, and… [I’ve] noticed the same phenomena arise in projects relating to research and development, military logistics, the environment, venture capital, and facilities expansion. (more)

In his book summary at Amazon, Hubbard seems to explain this sort of pattern in terms of misconceptions: read his book to fix the three key misconceptions that keep people from measuring stuff. But the above pattern seems hard to understand as mere random errors in guessing each variable’s info value or measurability.

In my experience trying to sell prediction markets to firms, I’ve noticed that when we suggest they make markets on the specific topics that seem to be of the most info value, they usually express strong reluctance and even hostility. They choose instead to estimate safer things, less likely to disrupt the organization.

For example, the most dramatic successes of prediction markets, i.e., where correct market forecasts most differ from official forecasts, are for project deadlines. Yet even after hearing this, few orgs are interested in starting such markets, and those that do and see dramatic success usually shut them down and don’t do them again. One plausible explanation is that project managers want the option to say after a failed project “no one could have known about those problems.” Prediction markets instead create a clear record that people did in fact know.

But that is just one reason for one kind of example. It isn’t a general explanation for what seems to be an important, general, and quite lamentable trend. So why exactly do we spend the most to measure the variables that matter the least, and refuse to even measure the variables that matter most?
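The “information value” Hubbard computes is the standard expected value of information from decision theory, evaluated separately for each variable. As a rough sketch of the quantity his macro presumably estimates (my notation, not his): for one uncertain variable $X$ in a decision with available actions $a$, remaining uncertain inputs $\theta$, and payoff $U$,

$$
\mathrm{EVPI}(X) \;=\; \mathbb{E}_{X}\!\left[\,\max_{a}\, \mathbb{E}_{\theta}\big[\,U(a, X, \theta)\,\big]\right] \;-\; \max_{a}\, \mathbb{E}_{X,\theta}\big[\,U(a, X, \theta)\,\big],
$$

i.e., how much better the decision would be expected to turn out if $X$ were learned exactly before choosing, with both terms evaluated under the current (prior) uncertainty. A variable scores zero either when it is already known precisely or when no plausible value of it would change the decision.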

  • Peter McCluskey

    Signalling the abilities that are used in measurement can motivate measurement without the data measured being valuable. One might want to show off knowledge of the latest measurement tools, care for the opinions of customers, willingness to work hard, ability to analyze big data sets, etc.

    • http://juridicalcoherence.blogspot.com/ Stephen Diamond

      Unless it’s somehow easier to signal through measurement procedures concerning unimportant matters, this doesn’t explain favoring unimportant measurements.

      1. Perhaps it has to do with managerial status. Measurement converts far-mode judgment to near-mode judgment, lowering the status of those making those important decisions.

      2. Another possibility is that the subject of measurement might otherwise be used for signaling. Do managers signal attitudes that are more significant than the accuracy of their estimates on important matters? I wouldn’t think so, however, since the matters are deemed important. I’ll go with #1.

    • http://overcomingbias.com RobinHanson

      That better explains interest in uninformative measures than hostility toward informative ones.

  • kurt9

    You can’t manage what you can’t measure.

  • Chip Morningstar

    There’s a piece of this thesis that I don’t follow. I don’t understand how Hubbard determined that the variables with the highest information values were the ones that were not measured. If they weren’t measured, how could he conclude anything about their value?

    • Doug Hubbard

      Chip,

      The value of additional information is zero once the quantity is known exactly. Information has value only while you still have uncertainty. How much would you pay me to know the orange harvest if you already knew it exactly? Nothing. You would only pay me to measure it if you still had uncertainty that would affect some relevant decision for you. The value of information is a standard and well understood calculation in decision theory.

      What I said in the book is that the items with the highest information value tended to be those items that they would not have measured otherwise. I ask them before I compute the information values what they would have measured or I look at evidence of what they had been measuring for earlier decisions. The high information values are often a surprise.
      Doug Hubbard.

      • Chip Morningstar

        I believe I grasp the concept of information value. What I didn’t get is how you could determine what a variable is worth if you had no measurement of it. The answer is that it was possible to determine the measurement after the fact. This does raise the question of whether there might be other unmeasured variables that would have been predictive but whose magnitudes are unavailable later (though I realize the latter was not what you were studying).

      • Doug Hubbard

        You have an initial state of uncertainty prior to a measurement. Measurement further reduces this state of uncertainty. You compute the value of the additional uncertainty reduction while still in the prior state of uncertainty. That actually is the standard decision-theory method for computing EVPI and EVI.
        Doug
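Here is a minimal Monte Carlo sketch of that procedure, done entirely under the prior as Hubbard describes. The decision, distributions, and numbers are invented for illustration; this is not his actual Excel macro.

```python
# Rough sketch: information value of one variable in a go/no-go decision,
# estimated by Monte Carlo under the prior. All inputs here are made up.
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Prior uncertainty over two inputs of a hypothetical project decision.
adoption = rng.uniform(0.05, 0.60, N)   # fraction of users who adopt
cost = rng.normal(2.0, 0.5, N)          # project cost in $M

def payoff(go, adoption, cost):
    """Net benefit in $M: benefits scale with adoption; not going earns nothing."""
    return go * (10.0 * adoption - cost)

# Value of the best action chosen now, under full prior uncertainty.
value_now = max(payoff(1, adoption, cost).mean(),
                payoff(0, adoption, cost).mean())

# Value if adoption were learned exactly before deciding: for each possible
# adoption value, average over the remaining uncertainty (cost), pick the
# better action, then average over the prior on adoption. The payoff is
# linear in cost, so that inner average is just cost.mean().
go_value_given_adoption = 10.0 * adoption - cost.mean()
value_with_info = np.maximum(go_value_given_adoption, 0.0).mean()

print(f"decide now:              {value_now:6.3f} $M")
print(f"decide knowing adoption: {value_with_info:6.3f} $M")
print(f"information value:       {value_with_info - value_now:6.3f} $M")
```

As the prior on a variable narrows, the two values converge and its information value falls toward zero, which is why the items a firm has been measuring for years tend to score so low.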

  • http://EasyOpinions.blogspot.com/ Andrew_M_Garland

    Mr. Hanson,

    Could you add a few examples, some worthless measures and some of the most worthwhile which are not measured?

    How does an Excel macro determine the information value of a variable? Are these variables with low correlation to the improvement of other measures? Is this judgment made within a single business or is it derived from looking at many businesses in the same industry?

    EasyOpinions

    • http://overcomingbias.com RobinHanson

      Follow the link to Luke’s post, and you’ll see he quotes more detail.

    • http://EasyOpinions.blogspot.com/ Andrew_M_Garland

      To RobinHanson,
      Thanks. I had overlooked the “more” link.

  • http://CommonSenseAtheism.com lukeprog

    Hubbard makes a few guesses as to why he observes this “measurement inversion” (pp. 112-113):

    > Why does the Measurement Inversion happen? First, people measure what they know how to measure or what they believe is easy to measure…

    > A second reason… is that managers might tend to measure things that are more likely to produce good news. After all, why measure the benefits if you have a suspicion there might not be any?…

    > Finally, not knowing the business value of the information from a measurement means people can’t put the difficulty of a measurement in context. A measurement they feel is “too difficult” actually might be perceived as practical if they understood that the information value was many times the expected cost.

    • http://overcomingbias.com RobinHanson

      Only the second guess could potentially explain a negative correlation between use and value.

      • Doug Hubbard

        Robin,
        First, thanks for the review. I believe that the first and third items on this list can also be systemic errors. At no point do I or would I suggest that such a persistent pattern is a result of non-biased, random accident. Now that I’ve seen this same pattern on over 80 projects to date, I can confidently say the chance of this being the result of random error is virtually zero.
        In regard to the first item on the list, what they have historically been measuring tends to be those things that (no surprise) have less uncertainty at this point. If you’ve been measuring system downtime for years, you probably have a better benchmark for it when asked for an estimate than something you’ve never measured at all – say, the adoption rate of a new technology by users or the additional revenue from some new product feature. Familiarity with lots of historical examples of a measurement no doubt reduces uncertainty compared to measurements where even a sense of scale is lacking.
        The third item depends on the first. Given that X is something they haven’t been measuring at all, they may have a lot of uncertainty about X and it may have a high information value for a given decision. But they may go on presuming that X is “too difficult” to measure because they have no sense of the relative value of measuring it compared to a potentially increased cost of measuring something unfamiliar. If we knew that the information value was $2 million, that might be enough to indicate to a manager that perhaps they should consider moving out of their comfort zone and learning how to measure something different.
        Doug Hubbard

      • http://overcomingbias.com RobinHanson

        So when you talk about the info value of a parameter, you are talking about the value of knowing that parameter 10% more precisely? In that case I guess it seems obvious that once you know a lot about a variable there is little to be learned by knowing it a little better.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        The hypothesis on offer for why people are hostile to measuring the most informative variables is that they don’t like making unfamiliar measurements, despite the familiar ones having little added value due to their previously having been measured. This doesn’t completely explain the hostility to informative measurements, but it suggests an explanation: human discomfort with the unfamiliar and attraction to the familiar. ( http://en.wikipedia.org/wiki/Mere_exposure_effect )

      • Doug Hubbard

        Yes, that is one of the explanations I offer. They don’t measure the most uncertain variables because they don’t know how to measure them – and they are uncertain because they haven’t been measuring them.
        Doug Hubbard

      • Doug Hubbard

        But they keep measuring what they already know more about – because they know how to measure it. So it’s not obvious to them. That’s the whole point.
        Doug Hubbard

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        If you explain this seemingly obvious truth to them and they understand your explanation, do they stop resisting novel measurements and seek guidance on how to make them? Is it that easy? If not, social science needs a further explanation of their resistance, although the explanation may be irrelevant in your role as methodologist. (But it might be relevant in your role as advisor.)

      • Doug Hubbard

        Generally, yes. Of course, if they hire us they feel a bit of a commitment to follow through with our recommendations.
        But don’t take for granted how “obvious” this seems to them. The high information value measurements are routinely a surprise, but sometimes obvious only in retrospect.
        Of course, the high uncertainty variables are more likely to be high-information value, depending also on how sensitive the model is to that variable. Part of the problem – which I think I might add to the next edition – is that there appears to be an assumption that if there is a lot of uncertainty, you need a lot of data to measure it. Mathematically speaking, just the opposite is true. They may presume that if something is highly uncertain, they need even more effort to measure it when, in fact, it is in the case of high uncertainty where just a few observations offer the biggest uncertainty reduction. If you know almost nothing, almost anything will tell you something.
        But I don’t claim to know all the reasons why it occurs. These are just my hypotheses. What I have observed is that 1) the measurement inversion exists and 2) when you compute information values there tends to be a high level of acceptance of the need for the measurement (at least among my clients). Further social science research on this would be interesting.
        Doug
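Hubbard’s point that a few observations buy the biggest uncertainty reduction when uncertainty is high can be checked with a quick simulation. A well-known illustration, which I believe he also uses in the book as the “Rule of Five”: the range spanned by just five random samples contains the population median with probability 1 - 2·(1/2)^5 = 93.75%, whatever the underlying distribution.

```python
# Sketch: five random samples already bracket the median of any population
# with roughly 94% confidence, no matter how skewed the distribution is.
import numpy as np

rng = np.random.default_rng(0)
population = rng.lognormal(mean=3.0, sigma=1.5, size=1_000_000)  # very skewed
true_median = np.median(population)

trials, hits = 10_000, 0
for _ in range(trials):
    sample = rng.choice(population, size=5)  # with replacement; fine for a population this large
    if sample.min() <= true_median <= sample.max():
        hits += 1

print(f"5-sample range contained the median in {hits / trials:.1%} of trials")
# Roughly 93-94%, matching the analytic 1 - 2*(1/2)**5 = 93.75%.
```

Conversely, for a variable that has been measured for years, a handful of extra observations barely moves the estimate, which is part of why the familiar measurements carry so little remaining information value.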

  • cd

    Hypothesis: once a measurement is widely known to have high information value, people begin to optimize for that measurement directly instead of for the desired outcome. The information value of the measurement will fall, possibly to zero, as a result.

    • http://overcomingbias.com RobinHanson

      There is no theorem saying that measures tend to become useless as people try to game them.

      • B_For_Bandana

        Isn’t that Goodhart’s Law? Why do you think it doesn’t apply to the cases you’re talking about?

        ETA: But fear of metric-gaming does seem unlikely as a sole explanation for the original problem.

      • A

        Must there be one for cd to form the hypothesis?

  • Christopher Chang

    My guess is that this is a standard principal-agent problem. Too many of the people involved in producing the measurements have a lot to lose from informative results (e.g. some of them may be fired), and not enough to gain to offset the danger. (And measurements which can be taken without active cooperation from existing employees tend to be of limited value.)

    • http://overcomingbias.com RobinHanson

      If agents try to suppress info that could reveal their failures, why don’t those principals insist on that info?

  • http://EasyOpinions.blogspot.com/ Andrew_M_Garland

    I guess this is “Non-Solution Bias”. People tend to measure what is easy to see and understand. Hubbard is not consulted if this works.

    He sees the cases where this didn’t work. Clients were measuring the wrong things and didn’t understand their real problems.

    • Doug Hubbard

      Good point. I have considered that possibility. My clients obviously self-select, so this is not a random subset of the population of all decisions. Perhaps it is only a problem for decisions managers already recognize as problematic. Of course, even if the Measurement Inversion were limited only to complex, difficult decisions, that would be bad enough. But I wonder how much bigger the impact could be if it were an aspect of mundane, routine, and extremely frequent decisions. That would be even scarier, I think.

      Doug Hubbard

  • Owen Cotton-Barratt

    Cross-posting from an answer I gave to essentially the same question on Luke’s post:

    One possibility is that there are a very large number of things they could measure, most of which have low information value. If they chose randomly we might expect to see an effect like this, and never notice all the low information possibilities they chose not to measure.

    I’m not suggesting that they actually do choose randomly, but it might be that they chose, say, the easiest to measure, and that these are neither systematically good nor bad, so it looks similar to random in terms of the useful information.

  • arch1

    Not sure this sheds light, but I’m reminded of the anecdote from “Surely You’re Joking, Mr. Feynman” in which, IIRC, Feynman ends up seated next to a royal family member at the Nobel dinner. The royal asks him what he does, Feynman says physics, she says “oh physics, no one knows anything about *that*” by way of dismissing the topic, and Feynman responds something like “on the contrary, people *do* know something about physics, which is precisely why people find it so difficult to discuss,” earning Feynman a memorably icy royal glare.

  • http://juridicalcoherence.blogspot.com/ Stephen Diamond

    > For example, the most dramatic successes of prediction markets, i.e., where correct market forecasts most differ from official forecasts, are for project deadlines.

    Seems you’d have to give one hell of a subsidy to get people to bet on project deadlines. It’s bad news for prediction markets if the most useful applications involve the most boring topics.